00:00:00.001 Started by upstream project "autotest-nightly" build number 4343 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3706 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.137 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.138 The recommended git tool is: git 00:00:00.138 using credential 00000000-0000-0000-0000-000000000002 00:00:00.140 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.215 Fetching changes from the remote Git repository 00:00:00.218 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.283 Using shallow fetch with depth 1 00:00:00.283 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.283 > git --version # timeout=10 00:00:00.333 > git --version # 'git version 2.39.2' 00:00:00.333 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.369 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.369 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.884 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.900 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.919 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.919 > git config core.sparsecheckout # timeout=10 00:00:05.939 > git read-tree -mu HEAD # timeout=10 00:00:05.957 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.982 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.982 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.115 [Pipeline] Start of Pipeline 00:00:06.127 [Pipeline] library 00:00:06.129 Loading library shm_lib@master 00:00:06.129 Library shm_lib@master is cached. Copying from home. 00:00:06.143 [Pipeline] node 00:00:06.155 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.157 [Pipeline] { 00:00:06.164 [Pipeline] catchError 00:00:06.165 [Pipeline] { 00:00:06.174 [Pipeline] wrap 00:00:06.182 [Pipeline] { 00:00:06.188 [Pipeline] stage 00:00:06.190 [Pipeline] { (Prologue) 00:00:06.204 [Pipeline] echo 00:00:06.206 Node: VM-host-WFP1 00:00:06.213 [Pipeline] cleanWs 00:00:06.222 [WS-CLEANUP] Deleting project workspace... 00:00:06.222 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.228 [WS-CLEANUP] done 00:00:06.433 [Pipeline] setCustomBuildProperty 00:00:06.500 [Pipeline] httpRequest 00:00:07.368 [Pipeline] echo 00:00:07.369 Sorcerer 10.211.164.20 is alive 00:00:07.376 [Pipeline] retry 00:00:07.377 [Pipeline] { 00:00:07.388 [Pipeline] httpRequest 00:00:07.392 HttpMethod: GET 00:00:07.392 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.393 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.395 Response Code: HTTP/1.1 200 OK 00:00:07.395 Success: Status code 200 is in the accepted range: 200,404 00:00:07.395 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.413 [Pipeline] } 00:00:08.431 [Pipeline] // retry 00:00:08.438 [Pipeline] sh 00:00:08.725 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.736 [Pipeline] httpRequest 00:00:09.118 [Pipeline] echo 00:00:09.119 Sorcerer 10.211.164.20 is alive 00:00:09.127 [Pipeline] retry 00:00:09.129 [Pipeline] { 00:00:09.139 [Pipeline] httpRequest 00:00:09.143 HttpMethod: GET 00:00:09.143 URL: http://10.211.164.20/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz 00:00:09.143 Sending request to url: http://10.211.164.20/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz 00:00:09.165 Response Code: HTTP/1.1 200 OK 00:00:09.166 Success: Status code 200 is in the accepted range: 200,404 00:00:09.166 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz 00:01:42.087 [Pipeline] } 00:01:42.105 [Pipeline] // retry 00:01:42.112 [Pipeline] sh 00:01:42.397 + tar --no-same-owner -xf spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz 00:01:44.945 [Pipeline] sh 00:01:45.228 + git -C spdk log --oneline -n5 00:01:45.228 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails 00:01:45.228 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions 00:01:45.228 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove 00:01:45.228 0ea9ac02f accel/mlx5: Create pool of UMRs 00:01:45.228 60adca7e1 lib/mlx5: API to configure UMR 00:01:45.246 [Pipeline] writeFile 00:01:45.261 [Pipeline] sh 00:01:45.583 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:45.619 [Pipeline] sh 00:01:45.901 + cat autorun-spdk.conf 00:01:45.901 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.901 SPDK_TEST_NVME=1 00:01:45.901 SPDK_TEST_FTL=1 00:01:45.901 SPDK_TEST_ISAL=1 00:01:45.901 SPDK_RUN_ASAN=1 00:01:45.901 SPDK_RUN_UBSAN=1 00:01:45.901 SPDK_TEST_XNVME=1 00:01:45.901 SPDK_TEST_NVME_FDP=1 00:01:45.901 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:45.909 RUN_NIGHTLY=1 00:01:45.913 [Pipeline] } 00:01:45.955 [Pipeline] // stage 00:01:45.964 [Pipeline] stage 00:01:45.966 [Pipeline] { (Run VM) 00:01:45.974 [Pipeline] sh 00:01:46.252 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:46.252 + echo 'Start stage prepare_nvme.sh' 00:01:46.252 Start stage prepare_nvme.sh 00:01:46.252 + [[ -n 2 ]] 00:01:46.252 + disk_prefix=ex2 00:01:46.252 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:46.252 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:46.252 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:46.252 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.252 ++ SPDK_TEST_NVME=1 00:01:46.252 ++ SPDK_TEST_FTL=1 00:01:46.252 ++ SPDK_TEST_ISAL=1 00:01:46.252 ++ SPDK_RUN_ASAN=1 00:01:46.252 
++ SPDK_RUN_UBSAN=1 00:01:46.252 ++ SPDK_TEST_XNVME=1 00:01:46.252 ++ SPDK_TEST_NVME_FDP=1 00:01:46.252 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.252 ++ RUN_NIGHTLY=1 00:01:46.252 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:46.252 + nvme_files=() 00:01:46.252 + declare -A nvme_files 00:01:46.252 + backend_dir=/var/lib/libvirt/images/backends 00:01:46.252 + nvme_files['nvme.img']=5G 00:01:46.252 + nvme_files['nvme-cmb.img']=5G 00:01:46.252 + nvme_files['nvme-multi0.img']=4G 00:01:46.252 + nvme_files['nvme-multi1.img']=4G 00:01:46.252 + nvme_files['nvme-multi2.img']=4G 00:01:46.252 + nvme_files['nvme-openstack.img']=8G 00:01:46.252 + nvme_files['nvme-zns.img']=5G 00:01:46.252 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:46.252 + (( SPDK_TEST_FTL == 1 )) 00:01:46.252 + nvme_files["nvme-ftl.img"]=6G 00:01:46.252 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:46.252 + nvme_files["nvme-fdp.img"]=1G 00:01:46.252 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:46.252 + for nvme in "${!nvme_files[@]}" 00:01:46.252 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:46.252 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:46.252 + for nvme in "${!nvme_files[@]}" 00:01:46.252 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:01:46.252 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:46.252 + for nvme in "${!nvme_files[@]}" 00:01:46.252 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:46.252 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:46.252 + for nvme in "${!nvme_files[@]}" 00:01:46.252 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:46.252 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:46.252 + for nvme in "${!nvme_files[@]}" 00:01:46.252 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:46.252 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:46.252 + for nvme in "${!nvme_files[@]}" 00:01:46.252 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:46.510 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:46.510 + for nvme in "${!nvme_files[@]}" 00:01:46.510 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:46.510 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:46.510 + for nvme in "${!nvme_files[@]}" 00:01:46.510 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:01:46.510 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:46.510 + for nvme in "${!nvme_files[@]}" 00:01:46.510 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:46.510 Formatting 
'/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:46.510 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:46.510 + echo 'End stage prepare_nvme.sh' 00:01:46.510 End stage prepare_nvme.sh 00:01:46.522 [Pipeline] sh 00:01:46.809 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:46.809 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:46.809 00:01:46.809 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:46.809 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:46.810 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:46.810 HELP=0 00:01:46.810 DRY_RUN=0 00:01:46.810 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img, 00:01:46.810 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:46.810 NVME_AUTO_CREATE=0 00:01:46.810 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,, 00:01:46.810 NVME_CMB=,,,, 00:01:46.810 NVME_PMR=,,,, 00:01:46.810 NVME_ZNS=,,,, 00:01:46.810 NVME_MS=true,,,, 00:01:46.810 NVME_FDP=,,,on, 00:01:46.810 SPDK_VAGRANT_DISTRO=fedora39 00:01:46.810 SPDK_VAGRANT_VMCPU=10 00:01:46.810 SPDK_VAGRANT_VMRAM=12288 00:01:46.810 SPDK_VAGRANT_PROVIDER=libvirt 00:01:46.810 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:46.810 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:46.810 SPDK_OPENSTACK_NETWORK=0 00:01:46.810 VAGRANT_PACKAGE_BOX=0 00:01:46.810 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:46.810 FORCE_DISTRO=true 00:01:46.810 VAGRANT_BOX_VERSION= 00:01:46.810 EXTRA_VAGRANTFILES= 00:01:46.810 NIC_MODEL=e1000 00:01:46.810 00:01:46.810 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:46.810 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:49.347 Bringing machine 'default' up with 'libvirt' provider... 00:01:50.727 ==> default: Creating image (snapshot of base box volume). 00:01:50.727 ==> default: Creating domain with the following settings... 
00:01:50.727 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733566429_6cff08aac99294153046 00:01:50.727 ==> default: -- Domain type: kvm 00:01:50.727 ==> default: -- Cpus: 10 00:01:50.727 ==> default: -- Feature: acpi 00:01:50.727 ==> default: -- Feature: apic 00:01:50.727 ==> default: -- Feature: pae 00:01:50.727 ==> default: -- Memory: 12288M 00:01:50.727 ==> default: -- Memory Backing: hugepages: 00:01:50.727 ==> default: -- Management MAC: 00:01:50.727 ==> default: -- Loader: 00:01:50.727 ==> default: -- Nvram: 00:01:50.727 ==> default: -- Base box: spdk/fedora39 00:01:50.727 ==> default: -- Storage pool: default 00:01:50.727 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733566429_6cff08aac99294153046.img (20G) 00:01:50.727 ==> default: -- Volume Cache: default 00:01:50.727 ==> default: -- Kernel: 00:01:50.727 ==> default: -- Initrd: 00:01:50.727 ==> default: -- Graphics Type: vnc 00:01:50.727 ==> default: -- Graphics Port: -1 00:01:50.727 ==> default: -- Graphics IP: 127.0.0.1 00:01:50.727 ==> default: -- Graphics Password: Not defined 00:01:50.727 ==> default: -- Video Type: cirrus 00:01:50.727 ==> default: -- Video VRAM: 9216 00:01:50.727 ==> default: -- Sound Type: 00:01:50.727 ==> default: -- Keymap: en-us 00:01:50.727 ==> default: -- TPM Path: 00:01:50.727 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:50.727 ==> default: -- Command line args: 00:01:50.727 ==> default: -> value=-device, 00:01:50.727 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:50.727 ==> default: -> value=-drive, 00:01:50.727 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:50.727 ==> default: -> value=-device, 00:01:50.727 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:50.727 ==> default: -> value=-device, 00:01:50.727 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:50.727 ==> default: -> value=-drive, 00:01:50.727 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0, 00:01:50.727 ==> default: -> value=-device, 00:01:50.727 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.727 ==> default: -> value=-device, 00:01:50.727 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:50.727 ==> default: -> value=-drive, 00:01:50.727 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:50.727 ==> default: -> value=-device, 00:01:50.727 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.727 ==> default: -> value=-drive, 00:01:50.727 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:50.727 ==> default: -> value=-device, 00:01:50.727 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.727 ==> default: -> value=-drive, 00:01:50.727 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:50.727 ==> default: -> value=-device, 00:01:50.727 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.727 ==> default: -> value=-device, 00:01:50.728 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:50.728 ==> default: -> value=-device, 00:01:50.728 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:50.728 ==> default: -> value=-drive, 00:01:50.728 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:50.728 ==> default: -> value=-device, 00:01:50.728 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.986 ==> default: Creating shared folders metadata... 00:01:50.986 ==> default: Starting domain. 00:01:52.894 ==> default: Waiting for domain to get an IP address... 00:02:10.983 ==> default: Waiting for SSH to become available... 00:02:10.983 ==> default: Configuring and enabling network interfaces... 00:02:15.179 default: SSH address: 192.168.121.121:22 00:02:15.179 default: SSH username: vagrant 00:02:15.179 default: SSH auth method: private key 00:02:17.713 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:27.707 ==> default: Mounting SSHFS shared folder... 00:02:28.645 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:28.645 ==> default: Checking Mount.. 00:02:30.550 ==> default: Folder Successfully Mounted! 00:02:30.550 ==> default: Running provisioner: file... 00:02:31.487 default: ~/.gitconfig => .gitconfig 00:02:32.056 00:02:32.056 SUCCESS! 00:02:32.056 00:02:32.056 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:32.056 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:32.056 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:32.056 00:02:32.066 [Pipeline] } 00:02:32.080 [Pipeline] // stage 00:02:32.090 [Pipeline] dir 00:02:32.090 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:32.092 [Pipeline] { 00:02:32.104 [Pipeline] catchError 00:02:32.106 [Pipeline] { 00:02:32.119 [Pipeline] sh 00:02:32.401 + vagrant ssh-config --host vagrant 00:02:32.401 + sed -ne /^Host/,$p 00:02:32.401 + tee ssh_conf 00:02:34.933 Host vagrant 00:02:34.933 HostName 192.168.121.121 00:02:34.933 User vagrant 00:02:34.933 Port 22 00:02:34.933 UserKnownHostsFile /dev/null 00:02:34.933 StrictHostKeyChecking no 00:02:34.933 PasswordAuthentication no 00:02:34.933 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:34.933 IdentitiesOnly yes 00:02:34.933 LogLevel FATAL 00:02:34.933 ForwardAgent yes 00:02:34.933 ForwardX11 yes 00:02:34.933 00:02:34.947 [Pipeline] withEnv 00:02:34.949 [Pipeline] { 00:02:34.962 [Pipeline] sh 00:02:35.241 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:35.241 source /etc/os-release 00:02:35.241 [[ -e /image.version ]] && img=$(< /image.version) 00:02:35.241 # Minimal, systemd-like check. 
00:02:35.241 if [[ -e /.dockerenv ]]; then 00:02:35.241 # Clear garbage from the node's name: 00:02:35.241 # agt-er_autotest_547-896 -> autotest_547-896 00:02:35.241 # $HOSTNAME is the actual container id 00:02:35.241 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:35.241 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:35.241 # We can assume this is a mount from a host where container is running, 00:02:35.241 # so fetch its hostname to easily identify the target swarm worker. 00:02:35.241 container="$(< /etc/hostname) ($agent)" 00:02:35.241 else 00:02:35.241 # Fallback 00:02:35.241 container=$agent 00:02:35.241 fi 00:02:35.241 fi 00:02:35.241 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:35.241 00:02:35.511 [Pipeline] } 00:02:35.520 [Pipeline] // withEnv 00:02:35.526 [Pipeline] setCustomBuildProperty 00:02:35.533 [Pipeline] stage 00:02:35.534 [Pipeline] { (Tests) 00:02:35.544 [Pipeline] sh 00:02:35.822 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:36.092 [Pipeline] sh 00:02:36.364 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:36.636 [Pipeline] timeout 00:02:36.637 Timeout set to expire in 50 min 00:02:36.638 [Pipeline] { 00:02:36.651 [Pipeline] sh 00:02:36.931 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:37.498 HEAD is now at a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails 00:02:37.511 [Pipeline] sh 00:02:37.793 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:38.066 [Pipeline] sh 00:02:38.348 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:38.622 [Pipeline] sh 00:02:38.902 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:39.160 ++ readlink -f spdk_repo 00:02:39.160 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:39.160 + [[ -n /home/vagrant/spdk_repo ]] 00:02:39.160 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:39.160 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:39.160 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:39.160 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:39.160 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:39.160 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:39.160 + cd /home/vagrant/spdk_repo 00:02:39.160 + source /etc/os-release 00:02:39.160 ++ NAME='Fedora Linux' 00:02:39.160 ++ VERSION='39 (Cloud Edition)' 00:02:39.160 ++ ID=fedora 00:02:39.160 ++ VERSION_ID=39 00:02:39.160 ++ VERSION_CODENAME= 00:02:39.160 ++ PLATFORM_ID=platform:f39 00:02:39.160 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:39.160 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:39.160 ++ LOGO=fedora-logo-icon 00:02:39.160 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:39.160 ++ HOME_URL=https://fedoraproject.org/ 00:02:39.160 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:39.160 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:39.160 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:39.160 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:39.160 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:39.160 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:39.160 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:39.160 ++ SUPPORT_END=2024-11-12 00:02:39.160 ++ VARIANT='Cloud Edition' 00:02:39.160 ++ VARIANT_ID=cloud 00:02:39.160 + uname -a 00:02:39.160 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:39.160 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:39.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:39.986 Hugepages 00:02:39.986 node hugesize free / total 00:02:39.986 node0 1048576kB 0 / 0 00:02:39.986 node0 2048kB 0 / 0 00:02:39.986 00:02:39.986 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:39.986 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:39.986 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:39.986 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:39.986 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:40.246 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:40.246 + rm -f /tmp/spdk-ld-path 00:02:40.246 + source autorun-spdk.conf 00:02:40.246 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:40.246 ++ SPDK_TEST_NVME=1 00:02:40.246 ++ SPDK_TEST_FTL=1 00:02:40.246 ++ SPDK_TEST_ISAL=1 00:02:40.246 ++ SPDK_RUN_ASAN=1 00:02:40.246 ++ SPDK_RUN_UBSAN=1 00:02:40.246 ++ SPDK_TEST_XNVME=1 00:02:40.246 ++ SPDK_TEST_NVME_FDP=1 00:02:40.246 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:40.246 ++ RUN_NIGHTLY=1 00:02:40.246 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:40.246 + [[ -n '' ]] 00:02:40.246 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:40.246 + for M in /var/spdk/build-*-manifest.txt 00:02:40.246 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:40.246 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:40.246 + for M in /var/spdk/build-*-manifest.txt 00:02:40.246 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:40.246 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:40.246 + for M in /var/spdk/build-*-manifest.txt 00:02:40.246 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:40.246 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:40.246 ++ uname 00:02:40.246 + [[ Linux == \L\i\n\u\x ]] 00:02:40.246 + sudo dmesg -T 00:02:40.246 + sudo dmesg --clear 00:02:40.246 + dmesg_pid=5251 00:02:40.246 
+ [[ Fedora Linux == FreeBSD ]] 00:02:40.246 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:40.246 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:40.246 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:40.246 + [[ -x /usr/src/fio-static/fio ]] 00:02:40.246 + sudo dmesg -Tw 00:02:40.246 + export FIO_BIN=/usr/src/fio-static/fio 00:02:40.246 + FIO_BIN=/usr/src/fio-static/fio 00:02:40.246 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:40.246 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:40.246 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:40.246 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:40.246 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:40.246 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:40.246 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:40.246 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:40.246 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:40.506 10:14:39 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:40.506 10:14:39 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:40.506 10:14:39 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:40.506 10:14:39 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:40.506 10:14:39 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:40.506 10:14:39 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:40.506 10:14:39 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:40.506 10:14:39 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:40.506 10:14:39 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:40.506 10:14:39 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:40.506 10:14:39 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:40.506 10:14:39 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:02:40.506 10:14:39 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:40.506 10:14:39 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:40.506 10:14:39 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:40.506 10:14:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:40.506 10:14:39 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:40.506 10:14:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:40.506 10:14:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:40.506 10:14:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:40.506 10:14:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.506 10:14:39 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.506 10:14:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.506 10:14:39 -- paths/export.sh@5 -- $ export PATH 00:02:40.506 10:14:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.506 10:14:39 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:40.506 10:14:39 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:40.506 10:14:39 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733566479.XXXXXX 00:02:40.506 10:14:39 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733566479.sjNF8p 00:02:40.506 10:14:39 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:40.506 10:14:39 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:40.506 10:14:39 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:40.506 10:14:39 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:40.506 10:14:39 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:40.506 10:14:39 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:40.506 10:14:39 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:40.506 10:14:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:40.506 10:14:39 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:40.506 10:14:39 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:40.506 10:14:39 -- pm/common@17 -- $ local monitor 00:02:40.506 10:14:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.506 10:14:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.506 10:14:39 -- pm/common@25 -- $ sleep 1 00:02:40.506 10:14:39 -- pm/common@21 -- $ date +%s 00:02:40.506 10:14:39 -- pm/common@21 -- $ date +%s 00:02:40.506 10:14:39 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733566479 00:02:40.506 10:14:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733566479 00:02:40.506 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733566479_collect-cpu-load.pm.log 00:02:40.506 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733566479_collect-vmstat.pm.log 00:02:41.887 10:14:40 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:41.887 10:14:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:41.887 10:14:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:41.887 10:14:40 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:41.887 10:14:40 -- spdk/autobuild.sh@16 -- $ date -u 00:02:41.887 Sat Dec 7 10:14:40 AM UTC 2024 00:02:41.887 10:14:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:41.887 v25.01-pre-311-ga2f5e1c2d 00:02:41.887 10:14:40 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:41.887 10:14:40 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:41.887 10:14:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:41.887 10:14:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:41.887 10:14:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:41.887 ************************************ 00:02:41.887 START TEST asan 00:02:41.887 ************************************ 00:02:41.887 using asan 00:02:41.887 10:14:40 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:41.887 00:02:41.887 real 0m0.001s 00:02:41.887 user 0m0.000s 00:02:41.887 sys 0m0.000s 00:02:41.887 10:14:40 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:41.887 10:14:40 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:41.887 ************************************ 00:02:41.887 END TEST asan 00:02:41.887 ************************************ 00:02:41.887 10:14:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:41.887 10:14:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:41.887 10:14:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:41.887 10:14:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:41.887 10:14:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:41.887 ************************************ 00:02:41.887 START TEST ubsan 00:02:41.887 ************************************ 00:02:41.887 using ubsan 00:02:41.887 10:14:40 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:41.887 00:02:41.887 real 0m0.000s 00:02:41.887 user 0m0.000s 00:02:41.887 sys 0m0.000s 00:02:41.887 10:14:40 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:41.887 10:14:40 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:41.887 ************************************ 00:02:41.887 END TEST ubsan 00:02:41.887 ************************************ 00:02:41.887 10:14:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:41.887 10:14:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:41.887 10:14:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:41.887 10:14:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:41.887 10:14:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:41.887 10:14:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:41.887 10:14:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:02:41.887 10:14:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:41.887 10:14:40 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:41.887 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:41.887 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:42.455 Using 'verbs' RDMA provider 00:02:58.719 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:16.835 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:16.835 Creating mk/config.mk...done. 00:03:16.835 Creating mk/cc.flags.mk...done. 00:03:16.835 Type 'make' to build. 00:03:16.835 10:15:14 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:16.835 10:15:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:16.835 10:15:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:16.835 10:15:14 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.835 ************************************ 00:03:16.835 START TEST make 00:03:16.835 ************************************ 00:03:16.835 10:15:14 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:16.835 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:16.835 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:16.835 meson setup builddir \ 00:03:16.835 -Dwith-libaio=enabled \ 00:03:16.835 -Dwith-liburing=enabled \ 00:03:16.835 -Dwith-libvfn=disabled \ 00:03:16.835 -Dwith-spdk=disabled \ 00:03:16.835 -Dexamples=false \ 00:03:16.835 -Dtests=false \ 00:03:16.835 -Dtools=false && \ 00:03:16.835 meson compile -C builddir && \ 00:03:16.835 cd -) 00:03:16.835 make[1]: Nothing to be done for 'all'. 
00:03:17.817 The Meson build system 00:03:17.817 Version: 1.5.0 00:03:17.817 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:17.817 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:17.817 Build type: native build 00:03:17.817 Project name: xnvme 00:03:17.817 Project version: 0.7.5 00:03:17.817 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:17.817 C linker for the host machine: cc ld.bfd 2.40-14 00:03:17.817 Host machine cpu family: x86_64 00:03:17.817 Host machine cpu: x86_64 00:03:17.817 Message: host_machine.system: linux 00:03:17.817 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:17.817 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:17.817 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:17.817 Run-time dependency threads found: YES 00:03:17.817 Has header "setupapi.h" : NO 00:03:17.817 Has header "linux/blkzoned.h" : YES 00:03:17.817 Has header "linux/blkzoned.h" : YES (cached) 00:03:17.817 Has header "libaio.h" : YES 00:03:17.817 Library aio found: YES 00:03:17.817 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:17.817 Run-time dependency liburing found: YES 2.2 00:03:17.817 Dependency libvfn skipped: feature with-libvfn disabled 00:03:17.817 Found CMake: /usr/bin/cmake (3.27.7) 00:03:17.817 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:17.817 Subproject spdk : skipped: feature with-spdk disabled 00:03:17.817 Run-time dependency appleframeworks found: NO (tried framework) 00:03:17.817 Run-time dependency appleframeworks found: NO (tried framework) 00:03:17.817 Library rt found: YES 00:03:17.817 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:17.817 Configuring xnvme_config.h using configuration 00:03:17.817 Configuring xnvme.spec using configuration 00:03:17.817 Run-time dependency bash-completion found: YES 2.11 00:03:17.817 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:17.817 Program cp found: YES (/usr/bin/cp) 00:03:17.817 Build targets in project: 3 00:03:17.817 00:03:17.817 xnvme 0.7.5 00:03:17.817 00:03:17.817 Subprojects 00:03:17.817 spdk : NO Feature 'with-spdk' disabled 00:03:17.817 00:03:17.817 User defined options 00:03:17.817 examples : false 00:03:17.817 tests : false 00:03:17.817 tools : false 00:03:17.817 with-libaio : enabled 00:03:17.817 with-liburing: enabled 00:03:17.817 with-libvfn : disabled 00:03:17.817 with-spdk : disabled 00:03:17.817 00:03:17.817 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:18.079 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:18.079 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:18.338 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:18.338 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:18.338 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:18.338 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:18.338 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:18.338 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:18.338 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:18.338 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:18.338 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:18.338 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:18.338 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:18.338 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:18.338 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:18.338 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:18.338 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:18.338 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:18.338 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:18.338 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:18.338 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:18.338 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:18.338 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:18.597 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:18.597 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:18.597 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:18.597 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:18.597 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:18.597 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:18.597 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:18.597 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:18.597 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:18.597 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:18.597 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:18.597 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:18.597 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:18.597 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:18.597 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:18.597 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:18.597 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:18.597 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:18.597 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:18.597 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:18.597 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:18.597 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:18.597 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:18.597 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:18.597 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:18.597 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:18.597 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:18.597 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:18.597 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:18.597 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:18.597 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:18.597 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:18.597 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:18.597 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:18.855 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:18.855 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:18.855 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:18.855 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:18.855 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:18.855 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:18.855 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:18.855 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:18.855 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:18.855 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:18.855 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:18.855 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:18.855 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:18.855 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:18.855 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:18.855 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:19.114 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:19.372 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:19.372 [75/76] Linking static target lib/libxnvme.a 00:03:19.372 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:19.372 INFO: autodetecting backend as ninja 00:03:19.372 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:19.372 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:27.524 The Meson build system 00:03:27.524 Version: 1.5.0 00:03:27.524 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:27.524 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:27.524 Build type: native build 00:03:27.524 Program cat found: YES (/usr/bin/cat) 00:03:27.524 Project name: DPDK 00:03:27.524 Project version: 24.03.0 00:03:27.524 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:27.524 C linker for the host machine: cc ld.bfd 2.40-14 00:03:27.524 Host machine cpu family: x86_64 00:03:27.524 Host machine cpu: x86_64 00:03:27.524 Message: ## Building in Developer Mode ## 00:03:27.524 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:27.524 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:27.524 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:27.524 Program python3 found: YES (/usr/bin/python3) 00:03:27.524 Program cat found: YES (/usr/bin/cat) 00:03:27.524 Compiler for C supports arguments -march=native: YES 00:03:27.524 Checking for size of "void *" : 8 00:03:27.524 Checking for size of "void *" : 8 (cached) 00:03:27.524 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:03:27.524 Library m found: YES 00:03:27.524 Library numa found: YES 00:03:27.524 Has header "numaif.h" : YES 00:03:27.524 Library fdt found: NO 00:03:27.524 Library execinfo found: NO 00:03:27.524 Has header "execinfo.h" : YES 00:03:27.524 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:27.524 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:27.524 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:27.524 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:27.524 Run-time dependency openssl found: YES 3.1.1 00:03:27.524 Run-time dependency libpcap found: YES 1.10.4 00:03:27.524 Has header "pcap.h" with dependency libpcap: YES 00:03:27.524 Compiler for C supports arguments -Wcast-qual: YES 00:03:27.524 Compiler for C supports arguments -Wdeprecated: YES 00:03:27.524 Compiler for C supports arguments -Wformat: YES 00:03:27.524 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:27.524 Compiler for C supports arguments -Wformat-security: NO 00:03:27.524 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:27.524 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:27.524 Compiler for C supports arguments -Wnested-externs: YES 00:03:27.524 Compiler for C supports arguments -Wold-style-definition: YES 00:03:27.524 Compiler for C supports arguments -Wpointer-arith: YES 00:03:27.524 Compiler for C supports arguments -Wsign-compare: YES 00:03:27.524 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:27.524 Compiler for C supports arguments -Wundef: YES 00:03:27.524 Compiler for C supports arguments -Wwrite-strings: YES 00:03:27.524 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:27.524 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:27.524 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:27.524 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:27.524 Program objdump found: YES (/usr/bin/objdump) 00:03:27.524 Compiler for C supports arguments -mavx512f: YES 00:03:27.524 Checking if "AVX512 checking" compiles: YES 00:03:27.524 Fetching value of define "__SSE4_2__" : 1 00:03:27.524 Fetching value of define "__AES__" : 1 00:03:27.524 Fetching value of define "__AVX__" : 1 00:03:27.524 Fetching value of define "__AVX2__" : 1 00:03:27.524 Fetching value of define "__AVX512BW__" : 1 00:03:27.524 Fetching value of define "__AVX512CD__" : 1 00:03:27.524 Fetching value of define "__AVX512DQ__" : 1 00:03:27.524 Fetching value of define "__AVX512F__" : 1 00:03:27.524 Fetching value of define "__AVX512VL__" : 1 00:03:27.524 Fetching value of define "__PCLMUL__" : 1 00:03:27.524 Fetching value of define "__RDRND__" : 1 00:03:27.524 Fetching value of define "__RDSEED__" : 1 00:03:27.524 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:27.524 Fetching value of define "__znver1__" : (undefined) 00:03:27.524 Fetching value of define "__znver2__" : (undefined) 00:03:27.524 Fetching value of define "__znver3__" : (undefined) 00:03:27.524 Fetching value of define "__znver4__" : (undefined) 00:03:27.524 Library asan found: YES 00:03:27.524 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:27.524 Message: lib/log: Defining dependency "log" 00:03:27.524 Message: lib/kvargs: Defining dependency "kvargs" 00:03:27.524 Message: lib/telemetry: Defining dependency "telemetry" 00:03:27.524 Library rt found: YES 00:03:27.524 Checking for function "getentropy" : NO 00:03:27.524 
Message: lib/eal: Defining dependency "eal" 00:03:27.524 Message: lib/ring: Defining dependency "ring" 00:03:27.524 Message: lib/rcu: Defining dependency "rcu" 00:03:27.524 Message: lib/mempool: Defining dependency "mempool" 00:03:27.524 Message: lib/mbuf: Defining dependency "mbuf" 00:03:27.524 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:27.524 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:27.524 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:27.524 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:27.524 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:27.524 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:27.524 Compiler for C supports arguments -mpclmul: YES 00:03:27.524 Compiler for C supports arguments -maes: YES 00:03:27.524 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:27.524 Compiler for C supports arguments -mavx512bw: YES 00:03:27.524 Compiler for C supports arguments -mavx512dq: YES 00:03:27.524 Compiler for C supports arguments -mavx512vl: YES 00:03:27.524 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:27.524 Compiler for C supports arguments -mavx2: YES 00:03:27.524 Compiler for C supports arguments -mavx: YES 00:03:27.524 Message: lib/net: Defining dependency "net" 00:03:27.524 Message: lib/meter: Defining dependency "meter" 00:03:27.524 Message: lib/ethdev: Defining dependency "ethdev" 00:03:27.524 Message: lib/pci: Defining dependency "pci" 00:03:27.524 Message: lib/cmdline: Defining dependency "cmdline" 00:03:27.524 Message: lib/hash: Defining dependency "hash" 00:03:27.524 Message: lib/timer: Defining dependency "timer" 00:03:27.524 Message: lib/compressdev: Defining dependency "compressdev" 00:03:27.524 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:27.524 Message: lib/dmadev: Defining dependency "dmadev" 00:03:27.524 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:27.524 Message: lib/power: Defining dependency "power" 00:03:27.524 Message: lib/reorder: Defining dependency "reorder" 00:03:27.524 Message: lib/security: Defining dependency "security" 00:03:27.524 Has header "linux/userfaultfd.h" : YES 00:03:27.524 Has header "linux/vduse.h" : YES 00:03:27.524 Message: lib/vhost: Defining dependency "vhost" 00:03:27.524 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:27.524 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:27.524 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:27.524 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:27.524 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:27.524 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:27.524 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:27.524 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:27.524 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:27.524 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:27.524 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:27.524 Configuring doxy-api-html.conf using configuration 00:03:27.524 Configuring doxy-api-man.conf using configuration 00:03:27.524 Program mandb found: YES (/usr/bin/mandb) 00:03:27.524 Program sphinx-build found: NO 00:03:27.524 Configuring rte_build_config.h using configuration 00:03:27.524 Message: 00:03:27.524 ================= 00:03:27.524 Applications 
Enabled 00:03:27.524 ================= 00:03:27.524 00:03:27.524 apps: 00:03:27.524 00:03:27.524 00:03:27.524 Message: 00:03:27.524 ================= 00:03:27.524 Libraries Enabled 00:03:27.524 ================= 00:03:27.524 00:03:27.524 libs: 00:03:27.524 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:27.524 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:27.524 cryptodev, dmadev, power, reorder, security, vhost, 00:03:27.524 00:03:27.524 Message: 00:03:27.524 =============== 00:03:27.524 Drivers Enabled 00:03:27.524 =============== 00:03:27.524 00:03:27.524 common: 00:03:27.524 00:03:27.524 bus: 00:03:27.524 pci, vdev, 00:03:27.524 mempool: 00:03:27.524 ring, 00:03:27.524 dma: 00:03:27.524 00:03:27.524 net: 00:03:27.524 00:03:27.525 crypto: 00:03:27.525 00:03:27.525 compress: 00:03:27.525 00:03:27.525 vdpa: 00:03:27.525 00:03:27.525 00:03:27.525 Message: 00:03:27.525 ================= 00:03:27.525 Content Skipped 00:03:27.525 ================= 00:03:27.525 00:03:27.525 apps: 00:03:27.525 dumpcap: explicitly disabled via build config 00:03:27.525 graph: explicitly disabled via build config 00:03:27.525 pdump: explicitly disabled via build config 00:03:27.525 proc-info: explicitly disabled via build config 00:03:27.525 test-acl: explicitly disabled via build config 00:03:27.525 test-bbdev: explicitly disabled via build config 00:03:27.525 test-cmdline: explicitly disabled via build config 00:03:27.525 test-compress-perf: explicitly disabled via build config 00:03:27.525 test-crypto-perf: explicitly disabled via build config 00:03:27.525 test-dma-perf: explicitly disabled via build config 00:03:27.525 test-eventdev: explicitly disabled via build config 00:03:27.525 test-fib: explicitly disabled via build config 00:03:27.525 test-flow-perf: explicitly disabled via build config 00:03:27.525 test-gpudev: explicitly disabled via build config 00:03:27.525 test-mldev: explicitly disabled via build config 00:03:27.525 test-pipeline: explicitly disabled via build config 00:03:27.525 test-pmd: explicitly disabled via build config 00:03:27.525 test-regex: explicitly disabled via build config 00:03:27.525 test-sad: explicitly disabled via build config 00:03:27.525 test-security-perf: explicitly disabled via build config 00:03:27.525 00:03:27.525 libs: 00:03:27.525 argparse: explicitly disabled via build config 00:03:27.525 metrics: explicitly disabled via build config 00:03:27.525 acl: explicitly disabled via build config 00:03:27.525 bbdev: explicitly disabled via build config 00:03:27.525 bitratestats: explicitly disabled via build config 00:03:27.525 bpf: explicitly disabled via build config 00:03:27.525 cfgfile: explicitly disabled via build config 00:03:27.525 distributor: explicitly disabled via build config 00:03:27.525 efd: explicitly disabled via build config 00:03:27.525 eventdev: explicitly disabled via build config 00:03:27.525 dispatcher: explicitly disabled via build config 00:03:27.525 gpudev: explicitly disabled via build config 00:03:27.525 gro: explicitly disabled via build config 00:03:27.525 gso: explicitly disabled via build config 00:03:27.525 ip_frag: explicitly disabled via build config 00:03:27.525 jobstats: explicitly disabled via build config 00:03:27.525 latencystats: explicitly disabled via build config 00:03:27.525 lpm: explicitly disabled via build config 00:03:27.525 member: explicitly disabled via build config 00:03:27.525 pcapng: explicitly disabled via build config 00:03:27.525 rawdev: explicitly disabled via build config 00:03:27.525 
regexdev: explicitly disabled via build config 00:03:27.525 mldev: explicitly disabled via build config 00:03:27.525 rib: explicitly disabled via build config 00:03:27.525 sched: explicitly disabled via build config 00:03:27.525 stack: explicitly disabled via build config 00:03:27.525 ipsec: explicitly disabled via build config 00:03:27.525 pdcp: explicitly disabled via build config 00:03:27.525 fib: explicitly disabled via build config 00:03:27.525 port: explicitly disabled via build config 00:03:27.525 pdump: explicitly disabled via build config 00:03:27.525 table: explicitly disabled via build config 00:03:27.525 pipeline: explicitly disabled via build config 00:03:27.525 graph: explicitly disabled via build config 00:03:27.525 node: explicitly disabled via build config 00:03:27.525 00:03:27.525 drivers: 00:03:27.525 common/cpt: not in enabled drivers build config 00:03:27.525 common/dpaax: not in enabled drivers build config 00:03:27.525 common/iavf: not in enabled drivers build config 00:03:27.525 common/idpf: not in enabled drivers build config 00:03:27.525 common/ionic: not in enabled drivers build config 00:03:27.525 common/mvep: not in enabled drivers build config 00:03:27.525 common/octeontx: not in enabled drivers build config 00:03:27.525 bus/auxiliary: not in enabled drivers build config 00:03:27.525 bus/cdx: not in enabled drivers build config 00:03:27.525 bus/dpaa: not in enabled drivers build config 00:03:27.525 bus/fslmc: not in enabled drivers build config 00:03:27.525 bus/ifpga: not in enabled drivers build config 00:03:27.525 bus/platform: not in enabled drivers build config 00:03:27.525 bus/uacce: not in enabled drivers build config 00:03:27.525 bus/vmbus: not in enabled drivers build config 00:03:27.525 common/cnxk: not in enabled drivers build config 00:03:27.525 common/mlx5: not in enabled drivers build config 00:03:27.525 common/nfp: not in enabled drivers build config 00:03:27.525 common/nitrox: not in enabled drivers build config 00:03:27.525 common/qat: not in enabled drivers build config 00:03:27.525 common/sfc_efx: not in enabled drivers build config 00:03:27.525 mempool/bucket: not in enabled drivers build config 00:03:27.525 mempool/cnxk: not in enabled drivers build config 00:03:27.525 mempool/dpaa: not in enabled drivers build config 00:03:27.525 mempool/dpaa2: not in enabled drivers build config 00:03:27.525 mempool/octeontx: not in enabled drivers build config 00:03:27.525 mempool/stack: not in enabled drivers build config 00:03:27.525 dma/cnxk: not in enabled drivers build config 00:03:27.525 dma/dpaa: not in enabled drivers build config 00:03:27.525 dma/dpaa2: not in enabled drivers build config 00:03:27.525 dma/hisilicon: not in enabled drivers build config 00:03:27.525 dma/idxd: not in enabled drivers build config 00:03:27.525 dma/ioat: not in enabled drivers build config 00:03:27.525 dma/skeleton: not in enabled drivers build config 00:03:27.525 net/af_packet: not in enabled drivers build config 00:03:27.525 net/af_xdp: not in enabled drivers build config 00:03:27.525 net/ark: not in enabled drivers build config 00:03:27.525 net/atlantic: not in enabled drivers build config 00:03:27.525 net/avp: not in enabled drivers build config 00:03:27.525 net/axgbe: not in enabled drivers build config 00:03:27.525 net/bnx2x: not in enabled drivers build config 00:03:27.525 net/bnxt: not in enabled drivers build config 00:03:27.525 net/bonding: not in enabled drivers build config 00:03:27.525 net/cnxk: not in enabled drivers build config 00:03:27.525 net/cpfl: 
not in enabled drivers build config 00:03:27.525 net/cxgbe: not in enabled drivers build config 00:03:27.525 net/dpaa: not in enabled drivers build config 00:03:27.525 net/dpaa2: not in enabled drivers build config 00:03:27.525 net/e1000: not in enabled drivers build config 00:03:27.525 net/ena: not in enabled drivers build config 00:03:27.525 net/enetc: not in enabled drivers build config 00:03:27.525 net/enetfec: not in enabled drivers build config 00:03:27.525 net/enic: not in enabled drivers build config 00:03:27.525 net/failsafe: not in enabled drivers build config 00:03:27.525 net/fm10k: not in enabled drivers build config 00:03:27.525 net/gve: not in enabled drivers build config 00:03:27.525 net/hinic: not in enabled drivers build config 00:03:27.525 net/hns3: not in enabled drivers build config 00:03:27.525 net/i40e: not in enabled drivers build config 00:03:27.525 net/iavf: not in enabled drivers build config 00:03:27.525 net/ice: not in enabled drivers build config 00:03:27.525 net/idpf: not in enabled drivers build config 00:03:27.525 net/igc: not in enabled drivers build config 00:03:27.525 net/ionic: not in enabled drivers build config 00:03:27.525 net/ipn3ke: not in enabled drivers build config 00:03:27.525 net/ixgbe: not in enabled drivers build config 00:03:27.525 net/mana: not in enabled drivers build config 00:03:27.525 net/memif: not in enabled drivers build config 00:03:27.525 net/mlx4: not in enabled drivers build config 00:03:27.525 net/mlx5: not in enabled drivers build config 00:03:27.525 net/mvneta: not in enabled drivers build config 00:03:27.525 net/mvpp2: not in enabled drivers build config 00:03:27.525 net/netvsc: not in enabled drivers build config 00:03:27.525 net/nfb: not in enabled drivers build config 00:03:27.525 net/nfp: not in enabled drivers build config 00:03:27.525 net/ngbe: not in enabled drivers build config 00:03:27.525 net/null: not in enabled drivers build config 00:03:27.525 net/octeontx: not in enabled drivers build config 00:03:27.525 net/octeon_ep: not in enabled drivers build config 00:03:27.525 net/pcap: not in enabled drivers build config 00:03:27.525 net/pfe: not in enabled drivers build config 00:03:27.525 net/qede: not in enabled drivers build config 00:03:27.525 net/ring: not in enabled drivers build config 00:03:27.525 net/sfc: not in enabled drivers build config 00:03:27.525 net/softnic: not in enabled drivers build config 00:03:27.525 net/tap: not in enabled drivers build config 00:03:27.525 net/thunderx: not in enabled drivers build config 00:03:27.525 net/txgbe: not in enabled drivers build config 00:03:27.525 net/vdev_netvsc: not in enabled drivers build config 00:03:27.525 net/vhost: not in enabled drivers build config 00:03:27.525 net/virtio: not in enabled drivers build config 00:03:27.525 net/vmxnet3: not in enabled drivers build config 00:03:27.525 raw/*: missing internal dependency, "rawdev" 00:03:27.525 crypto/armv8: not in enabled drivers build config 00:03:27.525 crypto/bcmfs: not in enabled drivers build config 00:03:27.525 crypto/caam_jr: not in enabled drivers build config 00:03:27.525 crypto/ccp: not in enabled drivers build config 00:03:27.525 crypto/cnxk: not in enabled drivers build config 00:03:27.525 crypto/dpaa_sec: not in enabled drivers build config 00:03:27.525 crypto/dpaa2_sec: not in enabled drivers build config 00:03:27.525 crypto/ipsec_mb: not in enabled drivers build config 00:03:27.525 crypto/mlx5: not in enabled drivers build config 00:03:27.525 crypto/mvsam: not in enabled drivers build config 
00:03:27.525 crypto/nitrox: not in enabled drivers build config 00:03:27.525 crypto/null: not in enabled drivers build config 00:03:27.525 crypto/octeontx: not in enabled drivers build config 00:03:27.525 crypto/openssl: not in enabled drivers build config 00:03:27.525 crypto/scheduler: not in enabled drivers build config 00:03:27.525 crypto/uadk: not in enabled drivers build config 00:03:27.525 crypto/virtio: not in enabled drivers build config 00:03:27.525 compress/isal: not in enabled drivers build config 00:03:27.525 compress/mlx5: not in enabled drivers build config 00:03:27.525 compress/nitrox: not in enabled drivers build config 00:03:27.525 compress/octeontx: not in enabled drivers build config 00:03:27.525 compress/zlib: not in enabled drivers build config 00:03:27.525 regex/*: missing internal dependency, "regexdev" 00:03:27.525 ml/*: missing internal dependency, "mldev" 00:03:27.525 vdpa/ifc: not in enabled drivers build config 00:03:27.525 vdpa/mlx5: not in enabled drivers build config 00:03:27.525 vdpa/nfp: not in enabled drivers build config 00:03:27.525 vdpa/sfc: not in enabled drivers build config 00:03:27.525 event/*: missing internal dependency, "eventdev" 00:03:27.525 baseband/*: missing internal dependency, "bbdev" 00:03:27.525 gpu/*: missing internal dependency, "gpudev" 00:03:27.525 00:03:27.525 00:03:27.525 Build targets in project: 85 00:03:27.525 00:03:27.525 DPDK 24.03.0 00:03:27.525 00:03:27.525 User defined options 00:03:27.525 buildtype : debug 00:03:27.525 default_library : shared 00:03:27.525 libdir : lib 00:03:27.525 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:27.525 b_sanitize : address 00:03:27.525 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:27.525 c_link_args : 00:03:27.525 cpu_instruction_set: native 00:03:27.525 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:27.525 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:27.525 enable_docs : false 00:03:27.525 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:27.525 enable_kmods : false 00:03:27.525 max_lcores : 128 00:03:27.525 tests : false 00:03:27.526 00:03:27.526 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:27.526 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:27.526 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:27.526 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:27.526 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:27.526 [4/268] Linking static target lib/librte_kvargs.a 00:03:27.526 [5/268] Linking static target lib/librte_log.a 00:03:27.526 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:27.526 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:27.526 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:27.526 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 
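Note on the DPDK configuration summarized above: the "User defined options" block records the meson settings that SPDK's build scripts applied to the bundled DPDK 24.03 sources. The exact configure command is not printed in this log; the following is only a hand-written reconstruction for reference, with option names and values copied verbatim from the summary and the meson invocation itself assumed, not taken from the CI.

# Hypothetical reconstruction of the DPDK meson configuration shown above.
# Run from the DPDK source tree; values mirror the "User defined options"
# summary, the command itself is illustrative rather than the CI's literal call.
meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
    -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Dbuildtype=debug \
    -Ddefault_library=shared \
    -Dlibdir=lib \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dmax_lcores=128 \
    -Dtests=false
# The build itself is then driven by ninja against the same directory, as the
# log reports further below: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10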
00:03:27.526 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:27.526 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:27.526 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.526 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:27.526 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:27.526 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:27.783 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:27.783 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:27.783 [18/268] Linking static target lib/librte_telemetry.a 00:03:28.042 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.042 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:28.042 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:28.301 [22/268] Linking target lib/librte_log.so.24.1 00:03:28.301 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:28.301 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:28.301 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:28.301 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:28.301 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:28.301 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:28.301 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:28.301 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:28.560 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:28.560 [32/268] Linking target lib/librte_kvargs.so.24.1 00:03:28.560 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.819 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:28.819 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:28.819 [36/268] Linking target lib/librte_telemetry.so.24.1 00:03:28.819 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:28.819 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:28.819 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:28.819 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:28.819 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:28.819 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:28.819 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:29.078 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:29.078 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:29.078 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:29.078 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:29.078 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:29.337 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:29.337 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:29.337 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:29.337 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:29.596 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:29.596 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:29.596 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:29.596 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:29.596 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:29.855 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:29.855 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:29.855 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:29.855 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:29.855 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:29.855 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:30.115 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:30.115 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:30.115 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:30.115 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:30.375 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:30.375 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:30.375 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:30.375 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:30.375 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:30.634 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:30.634 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:30.634 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:30.634 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:30.634 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:30.634 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:30.634 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:30.634 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:30.634 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:30.894 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:30.894 [83/268] Linking static target lib/librte_ring.a 00:03:30.894 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:30.894 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:30.894 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:31.153 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:31.153 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:31.153 [89/268] Linking static target 
lib/librte_rcu.a 00:03:31.153 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:31.153 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:31.153 [92/268] Linking static target lib/librte_eal.a 00:03:31.415 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:31.415 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:31.415 [95/268] Linking static target lib/librte_mempool.a 00:03:31.415 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:31.415 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.415 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:31.674 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:31.674 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:31.674 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.933 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:31.933 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:31.933 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:31.933 [105/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:31.933 [106/268] Linking static target lib/librte_meter.a 00:03:31.933 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:31.933 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:31.933 [109/268] Linking static target lib/librte_net.a 00:03:32.192 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:32.192 [111/268] Linking static target lib/librte_mbuf.a 00:03:32.192 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:32.192 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:32.451 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:32.451 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.451 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:32.451 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.451 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.710 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:32.970 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:33.229 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.229 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:33.229 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:33.229 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:33.229 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:33.229 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:33.488 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:33.488 [128/268] Linking static target lib/librte_pci.a 00:03:33.488 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:33.488 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 
00:03:33.488 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:33.488 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:33.488 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:33.747 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:33.747 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:33.747 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:33.747 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:33.747 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.747 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:33.747 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:33.747 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:33.747 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:33.747 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:34.007 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:34.007 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:34.007 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:34.007 [147/268] Linking static target lib/librte_cmdline.a 00:03:34.266 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:34.266 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:34.266 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:34.266 [151/268] Linking static target lib/librte_timer.a 00:03:34.266 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:34.266 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:34.527 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:34.786 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:34.786 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:34.786 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:34.786 [158/268] Linking static target lib/librte_compressdev.a 00:03:34.786 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:35.046 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:35.046 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:35.046 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.046 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:35.046 [164/268] Linking static target lib/librte_hash.a 00:03:35.046 [165/268] Linking static target lib/librte_ethdev.a 00:03:35.046 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:35.305 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:35.305 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:35.305 [169/268] Linking static target lib/librte_dmadev.a 00:03:35.305 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:35.305 
[171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:35.564 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:35.564 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.823 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:35.823 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.823 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:36.081 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:36.081 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:36.081 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:36.081 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:36.081 [181/268] Linking static target lib/librte_cryptodev.a 00:03:36.081 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.081 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:36.339 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:36.339 [185/268] Linking static target lib/librte_power.a 00:03:36.339 [186/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.598 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:36.598 [188/268] Linking static target lib/librte_reorder.a 00:03:36.598 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:36.598 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:36.598 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:36.857 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:36.857 [193/268] Linking static target lib/librte_security.a 00:03:37.116 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.116 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:37.375 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.634 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:37.634 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.634 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:37.634 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:37.634 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:37.893 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:38.151 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:38.151 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:38.152 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:38.152 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:38.409 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:38.410 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:38.410 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 
00:03:38.410 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:38.668 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.668 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:38.668 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:38.668 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:38.668 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:38.668 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:38.668 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:38.668 [218/268] Linking static target drivers/librte_bus_pci.a 00:03:38.668 [219/268] Linking static target drivers/librte_bus_vdev.a 00:03:38.668 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:38.668 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:38.926 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:38.926 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.926 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:38.927 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:38.927 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:39.186 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.121 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:43.410 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:43.410 [230/268] Linking static target lib/librte_vhost.a 00:03:43.979 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.979 [232/268] Linking target lib/librte_eal.so.24.1 00:03:44.238 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:44.238 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:44.238 [235/268] Linking target lib/librte_ring.so.24.1 00:03:44.238 [236/268] Linking target lib/librte_meter.so.24.1 00:03:44.238 [237/268] Linking target lib/librte_pci.so.24.1 00:03:44.238 [238/268] Linking target lib/librte_dmadev.so.24.1 00:03:44.238 [239/268] Linking target lib/librte_timer.so.24.1 00:03:44.238 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:44.238 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:44.238 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:44.497 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:44.497 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:44.497 [245/268] Linking target lib/librte_rcu.so.24.1 00:03:44.497 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:44.497 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:44.497 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.497 [249/268] Generating symbol file 
lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:44.497 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:44.497 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:44.497 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:44.756 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:44.756 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:44.756 [255/268] Linking target lib/librte_net.so.24.1 00:03:44.756 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:03:44.756 [257/268] Linking target lib/librte_reorder.so.24.1 00:03:45.016 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:45.016 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:45.016 [260/268] Linking target lib/librte_cmdline.so.24.1 00:03:45.016 [261/268] Linking target lib/librte_security.so.24.1 00:03:45.016 [262/268] Linking target lib/librte_hash.so.24.1 00:03:45.016 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:45.276 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:45.276 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:45.276 [266/268] Linking target lib/librte_power.so.24.1 00:03:45.276 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.535 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:45.535 INFO: autodetecting backend as ninja 00:03:45.535 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:03.795 CC lib/ut/ut.o 00:04:03.795 CC lib/ut_mock/mock.o 00:04:03.795 CC lib/log/log.o 00:04:03.795 CC lib/log/log_flags.o 00:04:03.795 CC lib/log/log_deprecated.o 00:04:03.795 LIB libspdk_ut_mock.a 00:04:03.795 LIB libspdk_log.a 00:04:03.795 LIB libspdk_ut.a 00:04:03.795 SO libspdk_ut_mock.so.6.0 00:04:03.795 SO libspdk_ut.so.2.0 00:04:03.795 SO libspdk_log.so.7.1 00:04:03.795 SYMLINK libspdk_ut.so 00:04:03.795 SYMLINK libspdk_ut_mock.so 00:04:03.795 SYMLINK libspdk_log.so 00:04:03.795 CXX lib/trace_parser/trace.o 00:04:03.795 CC lib/dma/dma.o 00:04:03.795 CC lib/util/base64.o 00:04:03.795 CC lib/ioat/ioat.o 00:04:03.795 CC lib/util/cpuset.o 00:04:03.795 CC lib/util/crc16.o 00:04:03.795 CC lib/util/bit_array.o 00:04:03.795 CC lib/util/crc32c.o 00:04:03.795 CC lib/util/crc32.o 00:04:03.795 CC lib/vfio_user/host/vfio_user_pci.o 00:04:03.795 CC lib/util/crc32_ieee.o 00:04:03.795 CC lib/util/crc64.o 00:04:03.795 CC lib/util/dif.o 00:04:03.795 CC lib/vfio_user/host/vfio_user.o 00:04:03.795 LIB libspdk_dma.a 00:04:03.795 CC lib/util/fd.o 00:04:03.795 SO libspdk_dma.so.5.0 00:04:03.795 CC lib/util/fd_group.o 00:04:03.795 CC lib/util/file.o 00:04:03.795 SYMLINK libspdk_dma.so 00:04:03.795 CC lib/util/hexlify.o 00:04:03.795 CC lib/util/iov.o 00:04:03.795 LIB libspdk_ioat.a 00:04:03.795 SO libspdk_ioat.so.7.0 00:04:03.795 CC lib/util/math.o 00:04:03.795 SYMLINK libspdk_ioat.so 00:04:03.795 CC lib/util/net.o 00:04:03.795 CC lib/util/pipe.o 00:04:03.795 LIB libspdk_vfio_user.a 00:04:03.795 CC lib/util/strerror_tls.o 00:04:03.795 CC lib/util/string.o 00:04:03.795 SO libspdk_vfio_user.so.5.0 00:04:03.795 CC lib/util/uuid.o 00:04:03.795 CC lib/util/xor.o 00:04:03.795 SYMLINK libspdk_vfio_user.so 00:04:03.795 CC lib/util/zipf.o 00:04:03.796 CC lib/util/md5.o 00:04:03.796 LIB libspdk_util.a 
00:04:03.796 SO libspdk_util.so.10.1 00:04:03.796 LIB libspdk_trace_parser.a 00:04:03.796 SO libspdk_trace_parser.so.6.0 00:04:03.796 SYMLINK libspdk_util.so 00:04:03.796 SYMLINK libspdk_trace_parser.so 00:04:03.796 CC lib/rdma_utils/rdma_utils.o 00:04:03.796 CC lib/vmd/vmd.o 00:04:03.796 CC lib/vmd/led.o 00:04:03.796 CC lib/conf/conf.o 00:04:03.796 CC lib/idxd/idxd.o 00:04:03.796 CC lib/idxd/idxd_user.o 00:04:03.796 CC lib/idxd/idxd_kernel.o 00:04:03.796 CC lib/env_dpdk/env.o 00:04:03.796 CC lib/json/json_parse.o 00:04:03.796 CC lib/json/json_util.o 00:04:03.796 CC lib/env_dpdk/memory.o 00:04:03.796 CC lib/env_dpdk/pci.o 00:04:03.796 LIB libspdk_conf.a 00:04:03.796 SO libspdk_conf.so.6.0 00:04:03.796 LIB libspdk_rdma_utils.a 00:04:03.796 CC lib/env_dpdk/init.o 00:04:03.796 CC lib/env_dpdk/threads.o 00:04:03.796 CC lib/json/json_write.o 00:04:03.796 SYMLINK libspdk_conf.so 00:04:03.796 CC lib/env_dpdk/pci_ioat.o 00:04:03.796 SO libspdk_rdma_utils.so.1.0 00:04:03.796 SYMLINK libspdk_rdma_utils.so 00:04:04.054 CC lib/env_dpdk/pci_virtio.o 00:04:04.054 CC lib/env_dpdk/pci_vmd.o 00:04:04.054 CC lib/rdma_provider/common.o 00:04:04.054 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:04.054 CC lib/env_dpdk/pci_idxd.o 00:04:04.054 LIB libspdk_json.a 00:04:04.054 CC lib/env_dpdk/pci_event.o 00:04:04.054 SO libspdk_json.so.6.0 00:04:04.054 CC lib/env_dpdk/sigbus_handler.o 00:04:04.054 CC lib/env_dpdk/pci_dpdk.o 00:04:04.313 SYMLINK libspdk_json.so 00:04:04.313 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:04.313 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:04.313 LIB libspdk_vmd.a 00:04:04.313 SO libspdk_vmd.so.6.0 00:04:04.313 LIB libspdk_idxd.a 00:04:04.313 LIB libspdk_rdma_provider.a 00:04:04.313 SO libspdk_idxd.so.12.1 00:04:04.313 SYMLINK libspdk_vmd.so 00:04:04.313 SO libspdk_rdma_provider.so.7.0 00:04:04.313 SYMLINK libspdk_idxd.so 00:04:04.313 SYMLINK libspdk_rdma_provider.so 00:04:04.313 CC lib/jsonrpc/jsonrpc_server.o 00:04:04.313 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:04.313 CC lib/jsonrpc/jsonrpc_client.o 00:04:04.313 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:04.572 LIB libspdk_jsonrpc.a 00:04:04.832 SO libspdk_jsonrpc.so.6.0 00:04:04.832 SYMLINK libspdk_jsonrpc.so 00:04:05.092 LIB libspdk_env_dpdk.a 00:04:05.092 SO libspdk_env_dpdk.so.15.1 00:04:05.351 CC lib/rpc/rpc.o 00:04:05.351 SYMLINK libspdk_env_dpdk.so 00:04:05.611 LIB libspdk_rpc.a 00:04:05.611 SO libspdk_rpc.so.6.0 00:04:05.611 SYMLINK libspdk_rpc.so 00:04:06.180 CC lib/notify/notify.o 00:04:06.180 CC lib/keyring/keyring.o 00:04:06.180 CC lib/notify/notify_rpc.o 00:04:06.180 CC lib/keyring/keyring_rpc.o 00:04:06.180 CC lib/trace/trace_flags.o 00:04:06.180 CC lib/trace/trace.o 00:04:06.180 CC lib/trace/trace_rpc.o 00:04:06.180 LIB libspdk_notify.a 00:04:06.180 LIB libspdk_trace.a 00:04:06.438 SO libspdk_notify.so.6.0 00:04:06.438 LIB libspdk_keyring.a 00:04:06.438 SO libspdk_trace.so.11.0 00:04:06.438 SYMLINK libspdk_notify.so 00:04:06.438 SO libspdk_keyring.so.2.0 00:04:06.438 SYMLINK libspdk_keyring.so 00:04:06.438 SYMLINK libspdk_trace.so 00:04:07.006 CC lib/sock/sock.o 00:04:07.006 CC lib/sock/sock_rpc.o 00:04:07.006 CC lib/thread/thread.o 00:04:07.006 CC lib/thread/iobuf.o 00:04:07.265 LIB libspdk_sock.a 00:04:07.265 SO libspdk_sock.so.10.0 00:04:07.265 SYMLINK libspdk_sock.so 00:04:07.835 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:07.835 CC lib/nvme/nvme_ctrlr.o 00:04:07.835 CC lib/nvme/nvme_ns_cmd.o 00:04:07.835 CC lib/nvme/nvme_fabric.o 00:04:07.835 CC lib/nvme/nvme_ns.o 00:04:07.835 CC lib/nvme/nvme_pcie_common.o 00:04:07.835 CC 
lib/nvme/nvme_pcie.o 00:04:07.835 CC lib/nvme/nvme_qpair.o 00:04:07.835 CC lib/nvme/nvme.o 00:04:08.404 CC lib/nvme/nvme_quirks.o 00:04:08.404 CC lib/nvme/nvme_transport.o 00:04:08.404 LIB libspdk_thread.a 00:04:08.404 CC lib/nvme/nvme_discovery.o 00:04:08.404 SO libspdk_thread.so.11.0 00:04:08.404 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:08.663 SYMLINK libspdk_thread.so 00:04:08.663 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:08.663 CC lib/nvme/nvme_tcp.o 00:04:08.663 CC lib/nvme/nvme_opal.o 00:04:08.663 CC lib/accel/accel.o 00:04:08.922 CC lib/blob/blobstore.o 00:04:08.922 CC lib/init/json_config.o 00:04:09.181 CC lib/accel/accel_rpc.o 00:04:09.181 CC lib/accel/accel_sw.o 00:04:09.181 CC lib/virtio/virtio.o 00:04:09.181 CC lib/fsdev/fsdev.o 00:04:09.181 CC lib/init/subsystem.o 00:04:09.181 CC lib/virtio/virtio_vhost_user.o 00:04:09.441 CC lib/blob/request.o 00:04:09.441 CC lib/init/subsystem_rpc.o 00:04:09.441 CC lib/blob/zeroes.o 00:04:09.441 CC lib/blob/blob_bs_dev.o 00:04:09.700 CC lib/init/rpc.o 00:04:09.700 CC lib/nvme/nvme_io_msg.o 00:04:09.700 CC lib/virtio/virtio_vfio_user.o 00:04:09.700 CC lib/nvme/nvme_poll_group.o 00:04:09.700 CC lib/virtio/virtio_pci.o 00:04:09.700 LIB libspdk_init.a 00:04:09.700 SO libspdk_init.so.6.0 00:04:09.700 CC lib/fsdev/fsdev_io.o 00:04:09.959 SYMLINK libspdk_init.so 00:04:09.959 CC lib/fsdev/fsdev_rpc.o 00:04:09.959 CC lib/nvme/nvme_zns.o 00:04:09.959 LIB libspdk_accel.a 00:04:09.960 SO libspdk_accel.so.16.0 00:04:09.960 LIB libspdk_virtio.a 00:04:09.960 SYMLINK libspdk_accel.so 00:04:09.960 CC lib/nvme/nvme_stubs.o 00:04:09.960 SO libspdk_virtio.so.7.0 00:04:09.960 CC lib/nvme/nvme_auth.o 00:04:10.219 SYMLINK libspdk_virtio.so 00:04:10.219 CC lib/nvme/nvme_cuse.o 00:04:10.219 LIB libspdk_fsdev.a 00:04:10.219 CC lib/event/app.o 00:04:10.219 CC lib/event/reactor.o 00:04:10.219 SO libspdk_fsdev.so.2.0 00:04:10.219 CC lib/event/log_rpc.o 00:04:10.219 SYMLINK libspdk_fsdev.so 00:04:10.219 CC lib/event/app_rpc.o 00:04:10.478 CC lib/bdev/bdev.o 00:04:10.478 CC lib/bdev/bdev_rpc.o 00:04:10.478 CC lib/bdev/bdev_zone.o 00:04:10.478 CC lib/event/scheduler_static.o 00:04:10.737 CC lib/nvme/nvme_rdma.o 00:04:10.737 CC lib/bdev/part.o 00:04:10.737 CC lib/bdev/scsi_nvme.o 00:04:10.738 LIB libspdk_event.a 00:04:10.738 SO libspdk_event.so.14.0 00:04:10.738 SYMLINK libspdk_event.so 00:04:10.738 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:11.676 LIB libspdk_fuse_dispatcher.a 00:04:11.676 SO libspdk_fuse_dispatcher.so.1.0 00:04:11.676 SYMLINK libspdk_fuse_dispatcher.so 00:04:12.245 LIB libspdk_nvme.a 00:04:12.245 SO libspdk_nvme.so.15.0 00:04:12.503 LIB libspdk_blob.a 00:04:12.503 SYMLINK libspdk_nvme.so 00:04:12.763 SO libspdk_blob.so.12.0 00:04:12.763 SYMLINK libspdk_blob.so 00:04:13.332 CC lib/lvol/lvol.o 00:04:13.332 CC lib/blobfs/tree.o 00:04:13.332 CC lib/blobfs/blobfs.o 00:04:13.332 LIB libspdk_bdev.a 00:04:13.332 SO libspdk_bdev.so.17.0 00:04:13.332 SYMLINK libspdk_bdev.so 00:04:13.590 CC lib/ftl/ftl_core.o 00:04:13.590 CC lib/ftl/ftl_init.o 00:04:13.590 CC lib/ftl/ftl_debug.o 00:04:13.590 CC lib/ftl/ftl_layout.o 00:04:13.590 CC lib/nvmf/ctrlr.o 00:04:13.590 CC lib/scsi/dev.o 00:04:13.590 CC lib/ublk/ublk.o 00:04:13.590 CC lib/nbd/nbd.o 00:04:13.849 CC lib/nbd/nbd_rpc.o 00:04:13.849 CC lib/nvmf/ctrlr_discovery.o 00:04:13.849 CC lib/scsi/lun.o 00:04:14.109 CC lib/nvmf/ctrlr_bdev.o 00:04:14.109 CC lib/nvmf/subsystem.o 00:04:14.109 LIB libspdk_blobfs.a 00:04:14.109 CC lib/ftl/ftl_io.o 00:04:14.109 SO libspdk_blobfs.so.11.0 00:04:14.109 LIB libspdk_nbd.a 00:04:14.109 
SO libspdk_nbd.so.7.0 00:04:14.109 SYMLINK libspdk_blobfs.so 00:04:14.109 CC lib/nvmf/nvmf.o 00:04:14.109 LIB libspdk_lvol.a 00:04:14.109 SYMLINK libspdk_nbd.so 00:04:14.109 CC lib/nvmf/nvmf_rpc.o 00:04:14.109 CC lib/scsi/port.o 00:04:14.109 SO libspdk_lvol.so.11.0 00:04:14.368 SYMLINK libspdk_lvol.so 00:04:14.368 CC lib/nvmf/transport.o 00:04:14.368 CC lib/ublk/ublk_rpc.o 00:04:14.368 CC lib/ftl/ftl_sb.o 00:04:14.368 CC lib/nvmf/tcp.o 00:04:14.368 CC lib/scsi/scsi.o 00:04:14.626 LIB libspdk_ublk.a 00:04:14.626 CC lib/scsi/scsi_bdev.o 00:04:14.626 CC lib/ftl/ftl_l2p.o 00:04:14.626 SO libspdk_ublk.so.3.0 00:04:14.626 SYMLINK libspdk_ublk.so 00:04:14.626 CC lib/scsi/scsi_pr.o 00:04:14.626 CC lib/scsi/scsi_rpc.o 00:04:14.884 CC lib/ftl/ftl_l2p_flat.o 00:04:14.884 CC lib/scsi/task.o 00:04:14.884 CC lib/nvmf/stubs.o 00:04:14.884 CC lib/ftl/ftl_nv_cache.o 00:04:15.143 CC lib/ftl/ftl_band.o 00:04:15.143 CC lib/ftl/ftl_band_ops.o 00:04:15.143 LIB libspdk_scsi.a 00:04:15.143 CC lib/nvmf/mdns_server.o 00:04:15.143 SO libspdk_scsi.so.9.0 00:04:15.143 CC lib/ftl/ftl_writer.o 00:04:15.143 SYMLINK libspdk_scsi.so 00:04:15.143 CC lib/ftl/ftl_rq.o 00:04:15.401 CC lib/nvmf/rdma.o 00:04:15.401 CC lib/nvmf/auth.o 00:04:15.401 CC lib/ftl/ftl_reloc.o 00:04:15.401 CC lib/ftl/ftl_l2p_cache.o 00:04:15.401 CC lib/ftl/ftl_p2l.o 00:04:15.401 CC lib/ftl/ftl_p2l_log.o 00:04:15.401 CC lib/ftl/mngt/ftl_mngt.o 00:04:15.660 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:15.917 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:15.917 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:15.917 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:15.917 CC lib/iscsi/conn.o 00:04:15.917 CC lib/iscsi/init_grp.o 00:04:15.917 CC lib/vhost/vhost.o 00:04:15.917 CC lib/vhost/vhost_rpc.o 00:04:15.917 CC lib/vhost/vhost_scsi.o 00:04:16.174 CC lib/iscsi/iscsi.o 00:04:16.174 CC lib/iscsi/param.o 00:04:16.174 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:16.174 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:16.174 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:16.432 CC lib/vhost/vhost_blk.o 00:04:16.432 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:16.432 CC lib/iscsi/portal_grp.o 00:04:16.432 CC lib/iscsi/tgt_node.o 00:04:16.690 CC lib/vhost/rte_vhost_user.o 00:04:16.690 CC lib/iscsi/iscsi_subsystem.o 00:04:16.690 CC lib/iscsi/iscsi_rpc.o 00:04:16.690 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:16.948 CC lib/iscsi/task.o 00:04:16.948 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:16.948 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:16.948 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:16.948 CC lib/ftl/utils/ftl_conf.o 00:04:17.206 CC lib/ftl/utils/ftl_md.o 00:04:17.206 CC lib/ftl/utils/ftl_mempool.o 00:04:17.206 CC lib/ftl/utils/ftl_bitmap.o 00:04:17.206 CC lib/ftl/utils/ftl_property.o 00:04:17.206 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:17.206 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:17.206 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:17.464 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:17.464 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:17.464 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:17.464 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:17.464 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:17.464 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:17.464 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:17.464 LIB libspdk_nvmf.a 00:04:17.464 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:17.464 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:17.723 LIB libspdk_vhost.a 00:04:17.723 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:17.723 LIB libspdk_iscsi.a 00:04:17.723 CC lib/ftl/base/ftl_base_dev.o 00:04:17.723 CC lib/ftl/base/ftl_base_bdev.o 00:04:17.723 SO libspdk_vhost.so.8.0 
00:04:17.723 CC lib/ftl/ftl_trace.o 00:04:17.723 SO libspdk_nvmf.so.20.0 00:04:17.723 SO libspdk_iscsi.so.8.0 00:04:17.723 SYMLINK libspdk_vhost.so 00:04:17.980 SYMLINK libspdk_iscsi.so 00:04:17.980 LIB libspdk_ftl.a 00:04:17.980 SYMLINK libspdk_nvmf.so 00:04:18.240 SO libspdk_ftl.so.9.0 00:04:18.498 SYMLINK libspdk_ftl.so 00:04:19.064 CC module/env_dpdk/env_dpdk_rpc.o 00:04:19.064 CC module/keyring/file/keyring.o 00:04:19.064 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:19.064 CC module/keyring/linux/keyring.o 00:04:19.064 CC module/fsdev/aio/fsdev_aio.o 00:04:19.064 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:19.064 CC module/scheduler/gscheduler/gscheduler.o 00:04:19.064 CC module/blob/bdev/blob_bdev.o 00:04:19.064 CC module/accel/error/accel_error.o 00:04:19.064 CC module/sock/posix/posix.o 00:04:19.064 LIB libspdk_env_dpdk_rpc.a 00:04:19.064 SO libspdk_env_dpdk_rpc.so.6.0 00:04:19.322 SYMLINK libspdk_env_dpdk_rpc.so 00:04:19.322 CC module/accel/error/accel_error_rpc.o 00:04:19.322 CC module/keyring/file/keyring_rpc.o 00:04:19.322 CC module/keyring/linux/keyring_rpc.o 00:04:19.322 LIB libspdk_scheduler_gscheduler.a 00:04:19.322 LIB libspdk_scheduler_dpdk_governor.a 00:04:19.322 SO libspdk_scheduler_gscheduler.so.4.0 00:04:19.322 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:19.322 LIB libspdk_scheduler_dynamic.a 00:04:19.322 SO libspdk_scheduler_dynamic.so.4.0 00:04:19.322 SYMLINK libspdk_scheduler_gscheduler.so 00:04:19.322 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:19.322 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:19.322 CC module/fsdev/aio/linux_aio_mgr.o 00:04:19.322 LIB libspdk_keyring_linux.a 00:04:19.322 LIB libspdk_keyring_file.a 00:04:19.322 LIB libspdk_accel_error.a 00:04:19.322 SYMLINK libspdk_scheduler_dynamic.so 00:04:19.322 LIB libspdk_blob_bdev.a 00:04:19.322 SO libspdk_keyring_file.so.2.0 00:04:19.322 SO libspdk_keyring_linux.so.1.0 00:04:19.322 SO libspdk_accel_error.so.2.0 00:04:19.322 SO libspdk_blob_bdev.so.12.0 00:04:19.581 SYMLINK libspdk_keyring_file.so 00:04:19.581 SYMLINK libspdk_accel_error.so 00:04:19.581 SYMLINK libspdk_keyring_linux.so 00:04:19.581 SYMLINK libspdk_blob_bdev.so 00:04:19.581 CC module/accel/ioat/accel_ioat.o 00:04:19.581 CC module/accel/ioat/accel_ioat_rpc.o 00:04:19.581 CC module/accel/dsa/accel_dsa.o 00:04:19.581 CC module/accel/dsa/accel_dsa_rpc.o 00:04:19.581 CC module/accel/iaa/accel_iaa.o 00:04:19.581 LIB libspdk_accel_ioat.a 00:04:19.840 SO libspdk_accel_ioat.so.6.0 00:04:19.840 CC module/bdev/error/vbdev_error.o 00:04:19.840 CC module/bdev/delay/vbdev_delay.o 00:04:19.840 CC module/blobfs/bdev/blobfs_bdev.o 00:04:19.840 SYMLINK libspdk_accel_ioat.so 00:04:19.840 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:19.840 CC module/bdev/gpt/gpt.o 00:04:19.840 LIB libspdk_fsdev_aio.a 00:04:19.840 LIB libspdk_accel_dsa.a 00:04:19.840 CC module/accel/iaa/accel_iaa_rpc.o 00:04:19.840 SO libspdk_fsdev_aio.so.1.0 00:04:19.840 LIB libspdk_sock_posix.a 00:04:19.840 SO libspdk_accel_dsa.so.5.0 00:04:19.840 CC module/bdev/lvol/vbdev_lvol.o 00:04:19.840 SO libspdk_sock_posix.so.6.0 00:04:19.840 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:19.840 SYMLINK libspdk_fsdev_aio.so 00:04:19.840 CC module/bdev/gpt/vbdev_gpt.o 00:04:19.840 SYMLINK libspdk_accel_dsa.so 00:04:20.099 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:20.099 LIB libspdk_blobfs_bdev.a 00:04:20.099 LIB libspdk_accel_iaa.a 00:04:20.099 CC module/bdev/error/vbdev_error_rpc.o 00:04:20.099 SO libspdk_blobfs_bdev.so.6.0 00:04:20.099 SYMLINK libspdk_sock_posix.so 
00:04:20.099 SO libspdk_accel_iaa.so.3.0 00:04:20.099 SYMLINK libspdk_blobfs_bdev.so 00:04:20.099 SYMLINK libspdk_accel_iaa.so 00:04:20.099 LIB libspdk_bdev_delay.a 00:04:20.099 CC module/bdev/malloc/bdev_malloc.o 00:04:20.099 SO libspdk_bdev_delay.so.6.0 00:04:20.099 LIB libspdk_bdev_error.a 00:04:20.099 CC module/bdev/null/bdev_null.o 00:04:20.099 SO libspdk_bdev_error.so.6.0 00:04:20.099 CC module/bdev/nvme/bdev_nvme.o 00:04:20.358 SYMLINK libspdk_bdev_delay.so 00:04:20.358 LIB libspdk_bdev_gpt.a 00:04:20.358 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:20.358 CC module/bdev/passthru/vbdev_passthru.o 00:04:20.358 SO libspdk_bdev_gpt.so.6.0 00:04:20.358 CC module/bdev/raid/bdev_raid.o 00:04:20.358 SYMLINK libspdk_bdev_error.so 00:04:20.358 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:20.358 SYMLINK libspdk_bdev_gpt.so 00:04:20.358 LIB libspdk_bdev_lvol.a 00:04:20.618 CC module/bdev/null/bdev_null_rpc.o 00:04:20.618 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:20.618 SO libspdk_bdev_lvol.so.6.0 00:04:20.618 CC module/bdev/split/vbdev_split.o 00:04:20.618 LIB libspdk_bdev_passthru.a 00:04:20.618 SYMLINK libspdk_bdev_lvol.so 00:04:20.618 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:20.618 SO libspdk_bdev_passthru.so.6.0 00:04:20.618 CC module/bdev/xnvme/bdev_xnvme.o 00:04:20.618 LIB libspdk_bdev_null.a 00:04:20.618 SO libspdk_bdev_null.so.6.0 00:04:20.618 SYMLINK libspdk_bdev_passthru.so 00:04:20.618 CC module/bdev/split/vbdev_split_rpc.o 00:04:20.618 LIB libspdk_bdev_malloc.a 00:04:20.618 CC module/bdev/aio/bdev_aio.o 00:04:20.879 SYMLINK libspdk_bdev_null.so 00:04:20.879 CC module/bdev/aio/bdev_aio_rpc.o 00:04:20.879 SO libspdk_bdev_malloc.so.6.0 00:04:20.879 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:20.879 SYMLINK libspdk_bdev_malloc.so 00:04:20.879 LIB libspdk_bdev_split.a 00:04:20.879 CC module/bdev/ftl/bdev_ftl.o 00:04:20.879 SO libspdk_bdev_split.so.6.0 00:04:20.879 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:20.879 SYMLINK libspdk_bdev_split.so 00:04:20.879 CC module/bdev/nvme/nvme_rpc.o 00:04:20.879 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:20.879 LIB libspdk_bdev_zone_block.a 00:04:21.139 SO libspdk_bdev_zone_block.so.6.0 00:04:21.139 CC module/bdev/iscsi/bdev_iscsi.o 00:04:21.139 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:21.139 LIB libspdk_bdev_aio.a 00:04:21.139 LIB libspdk_bdev_xnvme.a 00:04:21.139 SYMLINK libspdk_bdev_zone_block.so 00:04:21.139 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:21.139 SO libspdk_bdev_xnvme.so.3.0 00:04:21.139 SO libspdk_bdev_aio.so.6.0 00:04:21.139 LIB libspdk_bdev_ftl.a 00:04:21.139 CC module/bdev/raid/bdev_raid_rpc.o 00:04:21.139 SYMLINK libspdk_bdev_aio.so 00:04:21.139 CC module/bdev/raid/bdev_raid_sb.o 00:04:21.139 CC module/bdev/nvme/bdev_mdns_client.o 00:04:21.139 SYMLINK libspdk_bdev_xnvme.so 00:04:21.139 CC module/bdev/raid/raid0.o 00:04:21.139 SO libspdk_bdev_ftl.so.6.0 00:04:21.399 CC module/bdev/raid/raid1.o 00:04:21.399 SYMLINK libspdk_bdev_ftl.so 00:04:21.399 CC module/bdev/raid/concat.o 00:04:21.399 CC module/bdev/nvme/vbdev_opal.o 00:04:21.399 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:21.399 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:21.399 LIB libspdk_bdev_iscsi.a 00:04:21.399 SO libspdk_bdev_iscsi.so.6.0 00:04:21.399 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:21.399 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:21.659 SYMLINK libspdk_bdev_iscsi.so 00:04:21.659 LIB libspdk_bdev_raid.a 00:04:21.659 SO libspdk_bdev_raid.so.6.0 00:04:21.659 LIB libspdk_bdev_virtio.a 00:04:21.659 SYMLINK 
libspdk_bdev_raid.so 00:04:21.659 SO libspdk_bdev_virtio.so.6.0 00:04:21.918 SYMLINK libspdk_bdev_virtio.so 00:04:22.858 LIB libspdk_bdev_nvme.a 00:04:23.118 SO libspdk_bdev_nvme.so.7.1 00:04:23.118 SYMLINK libspdk_bdev_nvme.so 00:04:23.688 CC module/event/subsystems/iobuf/iobuf.o 00:04:23.688 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:23.688 CC module/event/subsystems/keyring/keyring.o 00:04:23.688 CC module/event/subsystems/sock/sock.o 00:04:23.688 CC module/event/subsystems/vmd/vmd.o 00:04:23.688 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:23.688 CC module/event/subsystems/fsdev/fsdev.o 00:04:23.688 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:23.688 CC module/event/subsystems/scheduler/scheduler.o 00:04:23.948 LIB libspdk_event_sock.a 00:04:23.948 LIB libspdk_event_vmd.a 00:04:23.948 LIB libspdk_event_vhost_blk.a 00:04:23.948 LIB libspdk_event_fsdev.a 00:04:23.948 LIB libspdk_event_keyring.a 00:04:23.948 LIB libspdk_event_scheduler.a 00:04:23.948 LIB libspdk_event_iobuf.a 00:04:23.948 SO libspdk_event_sock.so.5.0 00:04:23.948 SO libspdk_event_vhost_blk.so.3.0 00:04:23.948 SO libspdk_event_fsdev.so.1.0 00:04:23.948 SO libspdk_event_keyring.so.1.0 00:04:23.948 SO libspdk_event_vmd.so.6.0 00:04:23.948 SO libspdk_event_scheduler.so.4.0 00:04:23.948 SO libspdk_event_iobuf.so.3.0 00:04:23.948 SYMLINK libspdk_event_sock.so 00:04:23.948 SYMLINK libspdk_event_keyring.so 00:04:23.948 SYMLINK libspdk_event_vhost_blk.so 00:04:23.948 SYMLINK libspdk_event_fsdev.so 00:04:23.948 SYMLINK libspdk_event_vmd.so 00:04:23.948 SYMLINK libspdk_event_scheduler.so 00:04:23.948 SYMLINK libspdk_event_iobuf.so 00:04:24.518 CC module/event/subsystems/accel/accel.o 00:04:24.518 LIB libspdk_event_accel.a 00:04:24.779 SO libspdk_event_accel.so.6.0 00:04:24.779 SYMLINK libspdk_event_accel.so 00:04:25.039 CC module/event/subsystems/bdev/bdev.o 00:04:25.299 LIB libspdk_event_bdev.a 00:04:25.299 SO libspdk_event_bdev.so.6.0 00:04:25.559 SYMLINK libspdk_event_bdev.so 00:04:25.819 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:25.819 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:25.819 CC module/event/subsystems/ublk/ublk.o 00:04:25.819 CC module/event/subsystems/nbd/nbd.o 00:04:25.819 CC module/event/subsystems/scsi/scsi.o 00:04:26.077 LIB libspdk_event_nbd.a 00:04:26.077 LIB libspdk_event_ublk.a 00:04:26.077 LIB libspdk_event_scsi.a 00:04:26.077 SO libspdk_event_nbd.so.6.0 00:04:26.077 SO libspdk_event_scsi.so.6.0 00:04:26.077 SO libspdk_event_ublk.so.3.0 00:04:26.077 LIB libspdk_event_nvmf.a 00:04:26.077 SYMLINK libspdk_event_scsi.so 00:04:26.077 SYMLINK libspdk_event_nbd.so 00:04:26.077 SO libspdk_event_nvmf.so.6.0 00:04:26.077 SYMLINK libspdk_event_ublk.so 00:04:26.336 SYMLINK libspdk_event_nvmf.so 00:04:26.336 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:26.336 CC module/event/subsystems/iscsi/iscsi.o 00:04:26.595 LIB libspdk_event_vhost_scsi.a 00:04:26.595 LIB libspdk_event_iscsi.a 00:04:26.595 SO libspdk_event_vhost_scsi.so.3.0 00:04:26.595 SO libspdk_event_iscsi.so.6.0 00:04:26.595 SYMLINK libspdk_event_iscsi.so 00:04:26.853 SYMLINK libspdk_event_vhost_scsi.so 00:04:26.853 SO libspdk.so.6.0 00:04:26.853 SYMLINK libspdk.so 00:04:27.422 CC app/trace_record/trace_record.o 00:04:27.422 CXX app/trace/trace.o 00:04:27.422 CC app/spdk_lspci/spdk_lspci.o 00:04:27.422 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:27.422 CC app/iscsi_tgt/iscsi_tgt.o 00:04:27.422 CC app/nvmf_tgt/nvmf_main.o 00:04:27.422 CC app/spdk_tgt/spdk_tgt.o 00:04:27.422 CC examples/ioat/perf/perf.o 00:04:27.422 
CC examples/util/zipf/zipf.o 00:04:27.422 CC test/thread/poller_perf/poller_perf.o 00:04:27.422 LINK spdk_lspci 00:04:27.422 LINK nvmf_tgt 00:04:27.422 LINK iscsi_tgt 00:04:27.422 LINK interrupt_tgt 00:04:27.422 LINK zipf 00:04:27.422 LINK poller_perf 00:04:27.422 LINK spdk_trace_record 00:04:27.422 LINK spdk_tgt 00:04:27.681 LINK ioat_perf 00:04:27.681 CC app/spdk_nvme_perf/perf.o 00:04:27.682 LINK spdk_trace 00:04:27.682 CC app/spdk_nvme_identify/identify.o 00:04:27.682 CC app/spdk_nvme_discover/discovery_aer.o 00:04:27.682 CC app/spdk_top/spdk_top.o 00:04:27.942 TEST_HEADER include/spdk/accel.h 00:04:27.942 TEST_HEADER include/spdk/accel_module.h 00:04:27.942 TEST_HEADER include/spdk/assert.h 00:04:27.942 TEST_HEADER include/spdk/barrier.h 00:04:27.942 TEST_HEADER include/spdk/base64.h 00:04:27.942 TEST_HEADER include/spdk/bdev.h 00:04:27.942 CC examples/ioat/verify/verify.o 00:04:27.942 TEST_HEADER include/spdk/bdev_module.h 00:04:27.942 TEST_HEADER include/spdk/bdev_zone.h 00:04:27.942 TEST_HEADER include/spdk/bit_array.h 00:04:27.942 TEST_HEADER include/spdk/bit_pool.h 00:04:27.942 TEST_HEADER include/spdk/blob_bdev.h 00:04:27.942 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:27.942 TEST_HEADER include/spdk/blobfs.h 00:04:27.942 TEST_HEADER include/spdk/blob.h 00:04:27.942 TEST_HEADER include/spdk/conf.h 00:04:27.942 TEST_HEADER include/spdk/config.h 00:04:27.942 TEST_HEADER include/spdk/cpuset.h 00:04:27.942 TEST_HEADER include/spdk/crc16.h 00:04:27.942 TEST_HEADER include/spdk/crc32.h 00:04:27.942 TEST_HEADER include/spdk/crc64.h 00:04:27.942 TEST_HEADER include/spdk/dif.h 00:04:27.942 TEST_HEADER include/spdk/dma.h 00:04:27.942 TEST_HEADER include/spdk/endian.h 00:04:27.942 TEST_HEADER include/spdk/env_dpdk.h 00:04:27.942 TEST_HEADER include/spdk/env.h 00:04:27.942 TEST_HEADER include/spdk/event.h 00:04:27.942 TEST_HEADER include/spdk/fd_group.h 00:04:27.942 TEST_HEADER include/spdk/fd.h 00:04:27.942 TEST_HEADER include/spdk/file.h 00:04:27.942 TEST_HEADER include/spdk/fsdev.h 00:04:27.942 TEST_HEADER include/spdk/fsdev_module.h 00:04:27.942 TEST_HEADER include/spdk/ftl.h 00:04:27.942 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:27.942 CC examples/thread/thread/thread_ex.o 00:04:27.942 TEST_HEADER include/spdk/gpt_spec.h 00:04:27.942 TEST_HEADER include/spdk/hexlify.h 00:04:27.942 CC test/dma/test_dma/test_dma.o 00:04:27.942 TEST_HEADER include/spdk/histogram_data.h 00:04:27.942 TEST_HEADER include/spdk/idxd.h 00:04:27.942 CC app/spdk_dd/spdk_dd.o 00:04:27.942 TEST_HEADER include/spdk/idxd_spec.h 00:04:27.942 TEST_HEADER include/spdk/init.h 00:04:27.942 TEST_HEADER include/spdk/ioat.h 00:04:27.942 TEST_HEADER include/spdk/ioat_spec.h 00:04:27.942 TEST_HEADER include/spdk/iscsi_spec.h 00:04:27.942 TEST_HEADER include/spdk/json.h 00:04:27.942 TEST_HEADER include/spdk/jsonrpc.h 00:04:27.942 TEST_HEADER include/spdk/keyring.h 00:04:27.942 TEST_HEADER include/spdk/keyring_module.h 00:04:27.942 TEST_HEADER include/spdk/likely.h 00:04:27.942 TEST_HEADER include/spdk/log.h 00:04:27.942 TEST_HEADER include/spdk/lvol.h 00:04:27.942 TEST_HEADER include/spdk/md5.h 00:04:27.942 TEST_HEADER include/spdk/memory.h 00:04:27.942 TEST_HEADER include/spdk/mmio.h 00:04:27.942 TEST_HEADER include/spdk/nbd.h 00:04:27.942 CC test/app/bdev_svc/bdev_svc.o 00:04:27.942 TEST_HEADER include/spdk/net.h 00:04:27.942 TEST_HEADER include/spdk/notify.h 00:04:27.942 TEST_HEADER include/spdk/nvme.h 00:04:27.942 TEST_HEADER include/spdk/nvme_intel.h 00:04:27.942 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:27.942 
TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:27.942 TEST_HEADER include/spdk/nvme_spec.h 00:04:27.942 TEST_HEADER include/spdk/nvme_zns.h 00:04:27.942 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:27.942 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:27.942 TEST_HEADER include/spdk/nvmf.h 00:04:27.942 TEST_HEADER include/spdk/nvmf_spec.h 00:04:27.942 LINK spdk_nvme_discover 00:04:27.942 TEST_HEADER include/spdk/nvmf_transport.h 00:04:27.942 TEST_HEADER include/spdk/opal.h 00:04:27.942 TEST_HEADER include/spdk/opal_spec.h 00:04:27.942 TEST_HEADER include/spdk/pci_ids.h 00:04:27.942 TEST_HEADER include/spdk/pipe.h 00:04:27.942 TEST_HEADER include/spdk/queue.h 00:04:27.942 TEST_HEADER include/spdk/reduce.h 00:04:27.942 TEST_HEADER include/spdk/rpc.h 00:04:27.942 TEST_HEADER include/spdk/scheduler.h 00:04:27.942 TEST_HEADER include/spdk/scsi.h 00:04:27.942 TEST_HEADER include/spdk/scsi_spec.h 00:04:27.942 TEST_HEADER include/spdk/sock.h 00:04:27.942 TEST_HEADER include/spdk/stdinc.h 00:04:27.942 TEST_HEADER include/spdk/string.h 00:04:27.942 TEST_HEADER include/spdk/thread.h 00:04:27.942 TEST_HEADER include/spdk/trace.h 00:04:27.942 TEST_HEADER include/spdk/trace_parser.h 00:04:27.942 TEST_HEADER include/spdk/tree.h 00:04:27.942 TEST_HEADER include/spdk/ublk.h 00:04:27.942 TEST_HEADER include/spdk/util.h 00:04:27.942 TEST_HEADER include/spdk/uuid.h 00:04:27.942 TEST_HEADER include/spdk/version.h 00:04:27.942 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:27.942 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:27.942 TEST_HEADER include/spdk/vhost.h 00:04:27.942 TEST_HEADER include/spdk/vmd.h 00:04:27.942 TEST_HEADER include/spdk/xor.h 00:04:27.942 TEST_HEADER include/spdk/zipf.h 00:04:27.942 CXX test/cpp_headers/accel.o 00:04:27.942 LINK verify 00:04:28.201 LINK bdev_svc 00:04:28.201 LINK thread 00:04:28.201 CXX test/cpp_headers/accel_module.o 00:04:28.201 LINK spdk_dd 00:04:28.201 CC app/fio/nvme/fio_plugin.o 00:04:28.201 CC app/vhost/vhost.o 00:04:28.460 CXX test/cpp_headers/assert.o 00:04:28.460 LINK test_dma 00:04:28.460 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:28.460 LINK spdk_nvme_perf 00:04:28.460 LINK vhost 00:04:28.460 CC examples/sock/hello_world/hello_sock.o 00:04:28.460 CXX test/cpp_headers/barrier.o 00:04:28.460 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:28.718 LINK spdk_nvme_identify 00:04:28.718 CXX test/cpp_headers/base64.o 00:04:28.718 CXX test/cpp_headers/bdev.o 00:04:28.718 LINK spdk_top 00:04:28.718 CC app/fio/bdev/fio_plugin.o 00:04:28.718 LINK hello_sock 00:04:28.718 CC examples/vmd/lsvmd/lsvmd.o 00:04:28.977 LINK spdk_nvme 00:04:28.977 CXX test/cpp_headers/bdev_module.o 00:04:28.977 LINK nvme_fuzz 00:04:28.977 LINK lsvmd 00:04:28.977 CC test/app/histogram_perf/histogram_perf.o 00:04:28.977 CC examples/idxd/perf/perf.o 00:04:28.977 CC test/app/jsoncat/jsoncat.o 00:04:28.977 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:28.977 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:28.977 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:28.977 CXX test/cpp_headers/bdev_zone.o 00:04:29.236 LINK histogram_perf 00:04:29.236 LINK jsoncat 00:04:29.236 CC examples/vmd/led/led.o 00:04:29.236 CXX test/cpp_headers/bit_array.o 00:04:29.236 LINK spdk_bdev 00:04:29.236 LINK hello_fsdev 00:04:29.236 LINK led 00:04:29.495 CXX test/cpp_headers/bit_pool.o 00:04:29.495 LINK idxd_perf 00:04:29.495 LINK vhost_fuzz 00:04:29.495 CC examples/accel/perf/accel_perf.o 00:04:29.495 CC test/env/mem_callbacks/mem_callbacks.o 00:04:29.495 CC examples/blob/hello_world/hello_blob.o 00:04:29.495 CXX 
test/cpp_headers/blob_bdev.o 00:04:29.495 CC examples/blob/cli/blobcli.o 00:04:29.495 CC test/app/stub/stub.o 00:04:29.754 CC test/rpc_client/rpc_client_test.o 00:04:29.754 CC test/event/event_perf/event_perf.o 00:04:29.754 CXX test/cpp_headers/blobfs_bdev.o 00:04:29.754 CC test/nvme/aer/aer.o 00:04:29.754 LINK hello_blob 00:04:29.754 LINK stub 00:04:29.754 LINK event_perf 00:04:29.754 LINK rpc_client_test 00:04:29.754 CXX test/cpp_headers/blobfs.o 00:04:30.013 LINK aer 00:04:30.013 CXX test/cpp_headers/blob.o 00:04:30.013 LINK mem_callbacks 00:04:30.013 CC test/env/vtophys/vtophys.o 00:04:30.013 CC test/event/reactor/reactor.o 00:04:30.013 LINK accel_perf 00:04:30.013 LINK blobcli 00:04:30.272 CC test/accel/dif/dif.o 00:04:30.272 CXX test/cpp_headers/conf.o 00:04:30.272 LINK vtophys 00:04:30.272 LINK reactor 00:04:30.272 CC test/blobfs/mkfs/mkfs.o 00:04:30.272 CC test/nvme/reset/reset.o 00:04:30.272 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:30.272 CXX test/cpp_headers/config.o 00:04:30.272 CXX test/cpp_headers/cpuset.o 00:04:30.272 LINK iscsi_fuzz 00:04:30.532 LINK mkfs 00:04:30.532 CC test/lvol/esnap/esnap.o 00:04:30.532 CC examples/nvme/reconnect/reconnect.o 00:04:30.532 CC examples/nvme/hello_world/hello_world.o 00:04:30.532 CC test/event/reactor_perf/reactor_perf.o 00:04:30.532 LINK env_dpdk_post_init 00:04:30.532 LINK reset 00:04:30.532 CXX test/cpp_headers/crc16.o 00:04:30.532 CXX test/cpp_headers/crc32.o 00:04:30.532 LINK reactor_perf 00:04:30.532 CXX test/cpp_headers/crc64.o 00:04:30.790 LINK hello_world 00:04:30.790 CC test/env/memory/memory_ut.o 00:04:30.790 CXX test/cpp_headers/dif.o 00:04:30.790 CC test/nvme/sgl/sgl.o 00:04:30.790 LINK reconnect 00:04:30.790 CC test/env/pci/pci_ut.o 00:04:30.790 CC test/event/app_repeat/app_repeat.o 00:04:30.790 CC test/event/scheduler/scheduler.o 00:04:30.790 LINK dif 00:04:31.049 CXX test/cpp_headers/dma.o 00:04:31.049 CC test/nvme/e2edp/nvme_dp.o 00:04:31.049 LINK app_repeat 00:04:31.049 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:31.049 CXX test/cpp_headers/endian.o 00:04:31.049 LINK scheduler 00:04:31.049 LINK sgl 00:04:31.049 CXX test/cpp_headers/env_dpdk.o 00:04:31.308 LINK nvme_dp 00:04:31.308 CXX test/cpp_headers/env.o 00:04:31.308 LINK pci_ut 00:04:31.308 CXX test/cpp_headers/event.o 00:04:31.308 CC test/bdev/bdevio/bdevio.o 00:04:31.308 CC examples/nvme/arbitration/arbitration.o 00:04:31.308 CC examples/nvme/hotplug/hotplug.o 00:04:31.566 CXX test/cpp_headers/fd_group.o 00:04:31.566 CXX test/cpp_headers/fd.o 00:04:31.566 CC test/nvme/overhead/overhead.o 00:04:31.566 CC test/nvme/err_injection/err_injection.o 00:04:31.566 LINK nvme_manage 00:04:31.566 LINK hotplug 00:04:31.566 CXX test/cpp_headers/file.o 00:04:31.566 LINK arbitration 00:04:31.566 LINK bdevio 00:04:31.566 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:31.824 LINK err_injection 00:04:31.824 CC examples/nvme/abort/abort.o 00:04:31.824 LINK overhead 00:04:31.824 CXX test/cpp_headers/fsdev.o 00:04:31.824 CXX test/cpp_headers/fsdev_module.o 00:04:31.824 CXX test/cpp_headers/ftl.o 00:04:31.824 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:31.824 LINK cmb_copy 00:04:31.824 LINK memory_ut 00:04:32.084 CC test/nvme/startup/startup.o 00:04:32.084 LINK pmr_persistence 00:04:32.084 CXX test/cpp_headers/fuse_dispatcher.o 00:04:32.084 CC test/nvme/reserve/reserve.o 00:04:32.084 CC examples/bdev/hello_world/hello_bdev.o 00:04:32.084 CC test/nvme/simple_copy/simple_copy.o 00:04:32.084 CC examples/bdev/bdevperf/bdevperf.o 00:04:32.084 LINK abort 
00:04:32.084 CXX test/cpp_headers/gpt_spec.o 00:04:32.084 CC test/nvme/connect_stress/connect_stress.o 00:04:32.084 CXX test/cpp_headers/hexlify.o 00:04:32.344 LINK startup 00:04:32.344 LINK reserve 00:04:32.344 LINK hello_bdev 00:04:32.344 CXX test/cpp_headers/histogram_data.o 00:04:32.344 LINK simple_copy 00:04:32.344 LINK connect_stress 00:04:32.344 CC test/nvme/boot_partition/boot_partition.o 00:04:32.344 CC test/nvme/fused_ordering/fused_ordering.o 00:04:32.344 CC test/nvme/compliance/nvme_compliance.o 00:04:32.604 CXX test/cpp_headers/idxd.o 00:04:32.604 CXX test/cpp_headers/idxd_spec.o 00:04:32.604 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:32.604 CXX test/cpp_headers/init.o 00:04:32.604 LINK boot_partition 00:04:32.604 LINK fused_ordering 00:04:32.604 CXX test/cpp_headers/ioat.o 00:04:32.604 CC test/nvme/fdp/fdp.o 00:04:32.604 CXX test/cpp_headers/ioat_spec.o 00:04:32.864 LINK doorbell_aers 00:04:32.864 CC test/nvme/cuse/cuse.o 00:04:32.864 CXX test/cpp_headers/iscsi_spec.o 00:04:32.864 LINK nvme_compliance 00:04:32.864 CXX test/cpp_headers/json.o 00:04:32.864 CXX test/cpp_headers/jsonrpc.o 00:04:32.864 CXX test/cpp_headers/keyring.o 00:04:32.864 CXX test/cpp_headers/keyring_module.o 00:04:32.864 CXX test/cpp_headers/likely.o 00:04:32.864 CXX test/cpp_headers/log.o 00:04:32.864 LINK bdevperf 00:04:32.864 CXX test/cpp_headers/lvol.o 00:04:32.864 CXX test/cpp_headers/md5.o 00:04:33.123 CXX test/cpp_headers/memory.o 00:04:33.123 LINK fdp 00:04:33.123 CXX test/cpp_headers/mmio.o 00:04:33.123 CXX test/cpp_headers/nbd.o 00:04:33.123 CXX test/cpp_headers/net.o 00:04:33.123 CXX test/cpp_headers/notify.o 00:04:33.123 CXX test/cpp_headers/nvme.o 00:04:33.123 CXX test/cpp_headers/nvme_intel.o 00:04:33.123 CXX test/cpp_headers/nvme_ocssd.o 00:04:33.123 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:33.123 CXX test/cpp_headers/nvme_spec.o 00:04:33.123 CXX test/cpp_headers/nvme_zns.o 00:04:33.382 CXX test/cpp_headers/nvmf_cmd.o 00:04:33.382 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:33.382 CXX test/cpp_headers/nvmf.o 00:04:33.382 CC examples/nvmf/nvmf/nvmf.o 00:04:33.382 CXX test/cpp_headers/nvmf_spec.o 00:04:33.382 CXX test/cpp_headers/nvmf_transport.o 00:04:33.382 CXX test/cpp_headers/opal.o 00:04:33.382 CXX test/cpp_headers/opal_spec.o 00:04:33.382 CXX test/cpp_headers/pci_ids.o 00:04:33.382 CXX test/cpp_headers/pipe.o 00:04:33.645 CXX test/cpp_headers/queue.o 00:04:33.645 CXX test/cpp_headers/reduce.o 00:04:33.645 CXX test/cpp_headers/rpc.o 00:04:33.645 CXX test/cpp_headers/scheduler.o 00:04:33.645 CXX test/cpp_headers/scsi.o 00:04:33.645 CXX test/cpp_headers/scsi_spec.o 00:04:33.645 CXX test/cpp_headers/sock.o 00:04:33.645 LINK nvmf 00:04:33.645 CXX test/cpp_headers/stdinc.o 00:04:33.645 CXX test/cpp_headers/string.o 00:04:33.645 CXX test/cpp_headers/thread.o 00:04:33.645 CXX test/cpp_headers/trace.o 00:04:33.645 CXX test/cpp_headers/trace_parser.o 00:04:33.905 CXX test/cpp_headers/tree.o 00:04:33.905 CXX test/cpp_headers/ublk.o 00:04:33.905 CXX test/cpp_headers/util.o 00:04:33.905 CXX test/cpp_headers/uuid.o 00:04:33.905 CXX test/cpp_headers/version.o 00:04:33.905 CXX test/cpp_headers/vfio_user_pci.o 00:04:33.905 CXX test/cpp_headers/vfio_user_spec.o 00:04:33.905 CXX test/cpp_headers/vhost.o 00:04:33.905 CXX test/cpp_headers/vmd.o 00:04:33.905 CXX test/cpp_headers/xor.o 00:04:33.905 CXX test/cpp_headers/zipf.o 00:04:34.165 LINK cuse 00:04:36.074 LINK esnap 00:04:36.333 00:04:36.333 real 1m21.087s 00:04:36.333 user 6m53.457s 00:04:36.333 sys 1m56.182s 00:04:36.333 10:16:35 make -- 
common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:36.333 10:16:35 make -- common/autotest_common.sh@10 -- $ set +x 00:04:36.333 ************************************ 00:04:36.333 END TEST make 00:04:36.333 ************************************ 00:04:36.593 10:16:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:36.593 10:16:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:36.593 10:16:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:36.593 10:16:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.593 10:16:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:36.593 10:16:35 -- pm/common@44 -- $ pid=5293 00:04:36.593 10:16:35 -- pm/common@50 -- $ kill -TERM 5293 00:04:36.593 10:16:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.593 10:16:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:36.593 10:16:35 -- pm/common@44 -- $ pid=5295 00:04:36.593 10:16:35 -- pm/common@50 -- $ kill -TERM 5295 00:04:36.593 10:16:35 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:36.593 10:16:35 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:36.593 10:16:35 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.593 10:16:35 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.593 10:16:35 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.593 10:16:35 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.593 10:16:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.593 10:16:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.593 10:16:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.593 10:16:35 -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.593 10:16:35 -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.593 10:16:35 -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.593 10:16:35 -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.593 10:16:35 -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.593 10:16:35 -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.593 10:16:35 -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.593 10:16:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.593 10:16:35 -- scripts/common.sh@344 -- # case "$op" in 00:04:36.593 10:16:35 -- scripts/common.sh@345 -- # : 1 00:04:36.593 10:16:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.593 10:16:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.593 10:16:35 -- scripts/common.sh@365 -- # decimal 1 00:04:36.903 10:16:35 -- scripts/common.sh@353 -- # local d=1 00:04:36.903 10:16:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.903 10:16:35 -- scripts/common.sh@355 -- # echo 1 00:04:36.903 10:16:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.903 10:16:35 -- scripts/common.sh@366 -- # decimal 2 00:04:36.903 10:16:35 -- scripts/common.sh@353 -- # local d=2 00:04:36.903 10:16:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.903 10:16:35 -- scripts/common.sh@355 -- # echo 2 00:04:36.903 10:16:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.903 10:16:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.903 10:16:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.903 10:16:35 -- scripts/common.sh@368 -- # return 0 00:04:36.903 10:16:35 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.903 10:16:35 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.903 --rc genhtml_branch_coverage=1 00:04:36.903 --rc genhtml_function_coverage=1 00:04:36.903 --rc genhtml_legend=1 00:04:36.903 --rc geninfo_all_blocks=1 00:04:36.903 --rc geninfo_unexecuted_blocks=1 00:04:36.903 00:04:36.903 ' 00:04:36.903 10:16:35 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.903 --rc genhtml_branch_coverage=1 00:04:36.903 --rc genhtml_function_coverage=1 00:04:36.903 --rc genhtml_legend=1 00:04:36.903 --rc geninfo_all_blocks=1 00:04:36.903 --rc geninfo_unexecuted_blocks=1 00:04:36.903 00:04:36.903 ' 00:04:36.903 10:16:35 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.903 --rc genhtml_branch_coverage=1 00:04:36.903 --rc genhtml_function_coverage=1 00:04:36.903 --rc genhtml_legend=1 00:04:36.903 --rc geninfo_all_blocks=1 00:04:36.903 --rc geninfo_unexecuted_blocks=1 00:04:36.903 00:04:36.903 ' 00:04:36.903 10:16:35 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.903 --rc genhtml_branch_coverage=1 00:04:36.903 --rc genhtml_function_coverage=1 00:04:36.903 --rc genhtml_legend=1 00:04:36.903 --rc geninfo_all_blocks=1 00:04:36.903 --rc geninfo_unexecuted_blocks=1 00:04:36.903 00:04:36.903 ' 00:04:36.903 10:16:35 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:36.903 10:16:35 -- nvmf/common.sh@7 -- # uname -s 00:04:36.903 10:16:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.903 10:16:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.903 10:16:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.903 10:16:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.903 10:16:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.903 10:16:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.903 10:16:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.903 10:16:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.903 10:16:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.903 10:16:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.903 10:16:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b25f9a59-3323-475f-a653-2ff14ee861c0 00:04:36.903 
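The xtrace above is scripts/common.sh deciding whether the installed lcov (1.15 here) is older than 2.x before picking coverage flags: lt splits both version strings on dots and compares them field by field via cmp_versions. A condensed, illustrative re-implementation of that check (simplified to '<' only; the helper name version_lt and the trimmed edge-case handling are not the real scripts/common.sh code):

    # sketch: return 0 when $1 < $2, comparing dot-separated numeric fields
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal is not "less than"
    }

    # same decision the trace makes before exporting LCOV_OPTS
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi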
10:16:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=b25f9a59-3323-475f-a653-2ff14ee861c0 00:04:36.903 10:16:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.903 10:16:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.903 10:16:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.903 10:16:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.903 10:16:36 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:36.903 10:16:36 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.903 10:16:36 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.903 10:16:36 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.903 10:16:36 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.903 10:16:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.903 10:16:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.903 10:16:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.903 10:16:36 -- paths/export.sh@5 -- # export PATH 00:04:36.903 10:16:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.903 10:16:36 -- nvmf/common.sh@51 -- # : 0 00:04:36.903 10:16:36 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.903 10:16:36 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.903 10:16:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.903 10:16:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.903 10:16:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.903 10:16:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.903 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.903 10:16:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.903 10:16:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.903 10:16:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.903 10:16:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:36.903 10:16:36 -- spdk/autotest.sh@32 -- # uname -s 00:04:36.903 10:16:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:36.903 10:16:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:36.903 10:16:36 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:36.903 10:16:36 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:36.903 10:16:36 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:36.903 10:16:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:36.903 10:16:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:36.903 10:16:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:36.903 10:16:36 -- spdk/autotest.sh@48 -- # udevadm_pid=54739 00:04:36.903 10:16:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:36.903 10:16:36 -- pm/common@17 -- # local monitor 00:04:36.903 10:16:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.903 10:16:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:36.903 10:16:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.903 10:16:36 -- pm/common@25 -- # sleep 1 00:04:36.903 10:16:36 -- pm/common@21 -- # date +%s 00:04:36.903 10:16:36 -- pm/common@21 -- # date +%s 00:04:36.903 10:16:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733566596 00:04:36.903 10:16:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733566596 00:04:36.903 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733566596_collect-cpu-load.pm.log 00:04:36.903 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733566596_collect-vmstat.pm.log 00:04:37.897 10:16:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:37.897 10:16:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:37.897 10:16:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.897 10:16:37 -- common/autotest_common.sh@10 -- # set +x 00:04:37.897 10:16:37 -- spdk/autotest.sh@59 -- # create_test_list 00:04:37.897 10:16:37 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:37.897 10:16:37 -- common/autotest_common.sh@10 -- # set +x 00:04:37.897 10:16:37 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:37.897 10:16:37 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:37.897 10:16:37 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:37.897 10:16:37 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:37.897 10:16:37 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:37.897 10:16:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:37.897 10:16:37 -- common/autotest_common.sh@1457 -- # uname 00:04:37.897 10:16:37 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:37.897 10:16:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:37.897 10:16:37 -- common/autotest_common.sh@1477 -- # uname 00:04:37.897 10:16:37 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:37.897 10:16:37 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:37.897 10:16:37 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:38.156 lcov: LCOV version 1.15 00:04:38.156 10:16:37 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:56.251 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:56.251 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:11.139 10:17:08 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:11.139 10:17:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.139 10:17:08 -- common/autotest_common.sh@10 -- # set +x 00:05:11.139 10:17:08 -- spdk/autotest.sh@78 -- # rm -f 00:05:11.139 10:17:08 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.139 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.139 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:11.139 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:11.139 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:11.139 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:11.139 10:17:10 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:11.139 10:17:10 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:11.139 10:17:10 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:11.139 10:17:10 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:11.139 10:17:10 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:11.139 10:17:10 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:11.139 10:17:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:11.139 10:17:10 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:11.139 10:17:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:11.139 10:17:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:11.139 10:17:10 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:11.139 10:17:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:11.139 10:17:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.139 10:17:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:11.139 10:17:10 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:11.139 10:17:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:11.139 10:17:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:11.139 10:17:10 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:11.139 10:17:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:11.139 10:17:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.139 10:17:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:11.139 10:17:10 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:05:11.139 10:17:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:11.139 10:17:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:05:11.139 10:17:10 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:11.139 10:17:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:11.139 10:17:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.139 10:17:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:11.139 10:17:10 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:05:11.139 10:17:10 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:11.139 10:17:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:11.139 10:17:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.139 10:17:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:11.139 10:17:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:05:11.139 10:17:10 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:11.139 10:17:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:11.139 10:17:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.139 10:17:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:11.139 10:17:10 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:05:11.139 10:17:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:11.139 10:17:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:05:11.139 10:17:10 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:11.139 10:17:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:11.139 10:17:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.139 10:17:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:11.139 10:17:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.139 10:17:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.139 10:17:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:11.139 10:17:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:11.139 10:17:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:11.139 No valid GPT data, bailing 00:05:11.139 10:17:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:11.139 10:17:10 -- scripts/common.sh@394 -- # pt= 00:05:11.139 10:17:10 -- scripts/common.sh@395 -- # return 1 00:05:11.139 10:17:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:11.139 1+0 records in 00:05:11.139 1+0 records out 00:05:11.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203849 s, 51.4 MB/s 00:05:11.139 10:17:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.139 10:17:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.139 10:17:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:11.139 10:17:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:11.139 10:17:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:11.139 No valid GPT data, bailing 00:05:11.139 10:17:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:11.139 10:17:10 -- scripts/common.sh@394 -- # pt= 00:05:11.139 10:17:10 -- scripts/common.sh@395 -- # return 1 00:05:11.139 10:17:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:11.139 1+0 records in 00:05:11.139 1+0 records out 00:05:11.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00676025 s, 155 MB/s 00:05:11.139 10:17:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.139 10:17:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.139 10:17:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:11.139 10:17:10 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:11.139 10:17:10 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:11.139 No valid GPT data, bailing 00:05:11.139 10:17:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:11.139 10:17:10 -- scripts/common.sh@394 -- # pt= 00:05:11.139 10:17:10 -- scripts/common.sh@395 -- # return 1 00:05:11.139 10:17:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:11.139 1+0 records in 00:05:11.139 1+0 records out 00:05:11.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00658515 s, 159 MB/s 00:05:11.139 10:17:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.139 10:17:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.139 10:17:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:11.139 10:17:10 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:11.139 10:17:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:11.139 No valid GPT data, bailing 00:05:11.139 10:17:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:11.139 10:17:10 -- scripts/common.sh@394 -- # pt= 00:05:11.139 10:17:10 -- scripts/common.sh@395 -- # return 1 00:05:11.139 10:17:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:11.139 1+0 records in 00:05:11.139 1+0 records out 00:05:11.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00613249 s, 171 MB/s 00:05:11.139 10:17:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.139 10:17:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.139 10:17:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:11.139 10:17:10 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:11.139 10:17:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:11.139 No valid GPT data, bailing 00:05:11.140 10:17:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:11.140 10:17:10 -- scripts/common.sh@394 -- # pt= 00:05:11.140 10:17:10 -- scripts/common.sh@395 -- # return 1 00:05:11.140 10:17:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:11.140 1+0 records in 00:05:11.140 1+0 records out 00:05:11.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669053 s, 157 MB/s 00:05:11.140 10:17:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.140 10:17:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.140 10:17:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:11.140 10:17:10 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:11.140 10:17:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:11.399 No valid GPT data, bailing 00:05:11.399 10:17:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:11.399 10:17:10 -- scripts/common.sh@394 -- # pt= 00:05:11.399 10:17:10 -- scripts/common.sh@395 -- # return 1 00:05:11.399 10:17:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:11.399 1+0 records in 00:05:11.399 1+0 records out 00:05:11.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647005 s, 162 MB/s 00:05:11.399 10:17:10 -- spdk/autotest.sh@105 -- # sync 00:05:11.399 10:17:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:11.399 10:17:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:11.399 10:17:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:14.691 
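The pre_cleanup pass traced above walks every NVMe namespace, skips zoned ones (queue/zoned reports something other than "none"), treats a namespace as in use only when a partition table is found, and otherwise zeroes its first MiB. A condensed sketch of that loop; the real block_in_use consults scripts/spdk-gpt.py before falling back to blkid, and only the blkid check is shown here:

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        name=${dev#/dev/}
        # zoned namespaces are left alone
        zoned=none
        [[ -e /sys/block/$name/queue/zoned ]] && zoned=$(</sys/block/$name/queue/zoned)
        [[ $zoned != none ]] && continue
        # the traced block_in_use runs scripts/spdk-gpt.py first, then blkid;
        # only the blkid fallback is sketched here
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1   # wipe stale metadata
        fi
    done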
10:17:13 -- spdk/autotest.sh@111 -- # uname -s 00:05:14.691 10:17:13 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:14.691 10:17:13 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:14.691 10:17:13 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:15.260 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.841 Hugepages 00:05:15.841 node hugesize free / total 00:05:15.841 node0 1048576kB 0 / 0 00:05:15.841 node0 2048kB 0 / 0 00:05:15.841 00:05:15.841 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:15.841 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:16.100 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:16.100 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:16.359 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:16.359 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:16.359 10:17:15 -- spdk/autotest.sh@117 -- # uname -s 00:05:16.359 10:17:15 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:16.359 10:17:15 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:16.359 10:17:15 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.867 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.867 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.867 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.867 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.126 10:17:17 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:19.062 10:17:18 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:19.062 10:17:18 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:19.062 10:17:18 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:19.062 10:17:18 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:19.062 10:17:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:19.062 10:17:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:19.062 10:17:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:19.062 10:17:18 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:19.062 10:17:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:19.321 10:17:18 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:19.321 10:17:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:19.321 10:17:18 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.888 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.146 Waiting for block devices as requested 00:05:20.146 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.146 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.405 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.405 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.679 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:25.679 10:17:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.679 10:17:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
00:05:25.679 10:17:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:25.679 10:17:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:25.679 10:17:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:25.679 10:17:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:25.679 10:17:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:25.679 10:17:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:25.679 10:17:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:25.679 10:17:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:25.679 10:17:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:25.679 10:17:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.679 10:17:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.679 10:17:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.679 10:17:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.679 10:17:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.679 10:17:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:25.679 10:17:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.679 10:17:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.679 10:17:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.679 10:17:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.679 10:17:24 -- common/autotest_common.sh@1543 -- # continue 00:05:25.679 10:17:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.679 10:17:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:25.679 10:17:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:25.679 10:17:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:25.679 10:17:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:25.679 10:17:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:25.679 10:17:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:25.679 10:17:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:25.679 10:17:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:25.679 10:17:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:25.679 10:17:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:25.679 10:17:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.679 10:17:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.679 10:17:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.679 10:17:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.679 10:17:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.679 10:17:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:25.679 10:17:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.679 10:17:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.680 10:17:24 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:05:25.680 10:17:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.680 10:17:24 -- common/autotest_common.sh@1543 -- # continue 00:05:25.680 10:17:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.680 10:17:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:25.680 10:17:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:25.680 10:17:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:25.680 10:17:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:25.680 10:17:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:25.680 10:17:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:25.680 10:17:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:25.680 10:17:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:25.680 10:17:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:25.680 10:17:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:25.680 10:17:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.680 10:17:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.680 10:17:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.680 10:17:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.680 10:17:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.680 10:17:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:25.680 10:17:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.680 10:17:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.680 10:17:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.680 10:17:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.680 10:17:24 -- common/autotest_common.sh@1543 -- # continue 00:05:25.680 10:17:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.680 10:17:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:25.680 10:17:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:25.680 10:17:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:25.680 10:17:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:25.680 10:17:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:25.680 10:17:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:25.680 10:17:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:25.680 10:17:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:25.680 10:17:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:25.680 10:17:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:25.680 10:17:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.680 10:17:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.680 10:17:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.680 10:17:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.680 10:17:25 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.680 10:17:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:25.680 10:17:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.680 10:17:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.680 10:17:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.680 10:17:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.680 10:17:25 -- common/autotest_common.sh@1543 -- # continue 00:05:25.680 10:17:25 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:25.680 10:17:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.680 10:17:25 -- common/autotest_common.sh@10 -- # set +x 00:05:25.938 10:17:25 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:25.938 10:17:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.938 10:17:25 -- common/autotest_common.sh@10 -- # set +x 00:05:25.938 10:17:25 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.443 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.443 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.443 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.443 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.443 10:17:26 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:27.443 10:17:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.443 10:17:26 -- common/autotest_common.sh@10 -- # set +x 00:05:27.701 10:17:26 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:27.701 10:17:26 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:27.701 10:17:26 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:27.701 10:17:26 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:27.701 10:17:26 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:27.701 10:17:26 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:27.701 10:17:26 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:27.701 10:17:26 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:27.701 10:17:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:27.701 10:17:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:27.701 10:17:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:27.701 10:17:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:27.702 10:17:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:27.702 10:17:26 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:27.702 10:17:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:27.702 10:17:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:27.702 10:17:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:27.702 10:17:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:27.702 10:17:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.702 10:17:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:27.702 10:17:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:27.702 10:17:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:27.702 
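Each iteration above resolves a PCI address to its /dev/nvmeX controller through sysfs, then reads the OACS word and unallocated capacity with nvme id-ctrl to decide whether a namespace revert is needed. A trimmed sketch of that mapping and check, mirroring the readlink/grep/basename chain in the trace (path validation and the surrounding revert logic omitted):

    get_nvme_ctrlr_from_bdf() {
        local bdf=$1 path
        # e.g. /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 -> nvme1
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
        basename "$path"
    }

    ctrlr=/dev/$(get_nvme_ctrlr_from_bdf 0000:00:10.0)
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # ' 0x12a' in this run
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # ' 0' in this run
    (( oacs & 0x8 )) && echo "$ctrlr supports namespace management"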
10:17:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.702 10:17:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:27.702 10:17:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:27.702 10:17:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:27.702 10:17:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.702 10:17:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:27.702 10:17:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:27.702 10:17:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:27.702 10:17:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.702 10:17:26 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:27.702 10:17:26 -- common/autotest_common.sh@1572 -- # return 0 00:05:27.702 10:17:26 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:27.702 10:17:26 -- common/autotest_common.sh@1580 -- # return 0 00:05:27.702 10:17:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:27.702 10:17:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:27.702 10:17:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:27.702 10:17:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:27.702 10:17:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:27.702 10:17:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:27.702 10:17:26 -- common/autotest_common.sh@10 -- # set +x 00:05:27.702 10:17:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:27.702 10:17:26 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:27.702 10:17:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.702 10:17:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.702 10:17:26 -- common/autotest_common.sh@10 -- # set +x 00:05:27.702 ************************************ 00:05:27.702 START TEST env 00:05:27.702 ************************************ 00:05:27.702 10:17:27 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:27.961 * Looking for test storage... 
00:05:27.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:27.961 10:17:27 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.961 10:17:27 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.961 10:17:27 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.961 10:17:27 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.961 10:17:27 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.961 10:17:27 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.961 10:17:27 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.961 10:17:27 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.961 10:17:27 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.961 10:17:27 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.961 10:17:27 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.961 10:17:27 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.961 10:17:27 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.961 10:17:27 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.961 10:17:27 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.961 10:17:27 env -- scripts/common.sh@344 -- # case "$op" in 00:05:27.961 10:17:27 env -- scripts/common.sh@345 -- # : 1 00:05:27.961 10:17:27 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.961 10:17:27 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.961 10:17:27 env -- scripts/common.sh@365 -- # decimal 1 00:05:27.961 10:17:27 env -- scripts/common.sh@353 -- # local d=1 00:05:27.961 10:17:27 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.961 10:17:27 env -- scripts/common.sh@355 -- # echo 1 00:05:27.961 10:17:27 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.961 10:17:27 env -- scripts/common.sh@366 -- # decimal 2 00:05:27.961 10:17:27 env -- scripts/common.sh@353 -- # local d=2 00:05:27.961 10:17:27 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.961 10:17:27 env -- scripts/common.sh@355 -- # echo 2 00:05:27.962 10:17:27 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.962 10:17:27 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.962 10:17:27 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.962 10:17:27 env -- scripts/common.sh@368 -- # return 0 00:05:27.962 10:17:27 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.962 10:17:27 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.962 --rc genhtml_branch_coverage=1 00:05:27.962 --rc genhtml_function_coverage=1 00:05:27.962 --rc genhtml_legend=1 00:05:27.962 --rc geninfo_all_blocks=1 00:05:27.962 --rc geninfo_unexecuted_blocks=1 00:05:27.962 00:05:27.962 ' 00:05:27.962 10:17:27 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.962 --rc genhtml_branch_coverage=1 00:05:27.962 --rc genhtml_function_coverage=1 00:05:27.962 --rc genhtml_legend=1 00:05:27.962 --rc geninfo_all_blocks=1 00:05:27.962 --rc geninfo_unexecuted_blocks=1 00:05:27.962 00:05:27.962 ' 00:05:27.962 10:17:27 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.962 --rc genhtml_branch_coverage=1 00:05:27.962 --rc genhtml_function_coverage=1 00:05:27.962 --rc 
genhtml_legend=1 00:05:27.962 --rc geninfo_all_blocks=1 00:05:27.962 --rc geninfo_unexecuted_blocks=1 00:05:27.962 00:05:27.962 ' 00:05:27.962 10:17:27 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.962 --rc genhtml_branch_coverage=1 00:05:27.962 --rc genhtml_function_coverage=1 00:05:27.962 --rc genhtml_legend=1 00:05:27.962 --rc geninfo_all_blocks=1 00:05:27.962 --rc geninfo_unexecuted_blocks=1 00:05:27.962 00:05:27.962 ' 00:05:27.962 10:17:27 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.962 10:17:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.962 10:17:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.962 10:17:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.962 ************************************ 00:05:27.962 START TEST env_memory 00:05:27.962 ************************************ 00:05:27.962 10:17:27 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.962 00:05:27.962 00:05:27.962 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.962 http://cunit.sourceforge.net/ 00:05:27.962 00:05:27.962 00:05:27.962 Suite: memory 00:05:28.221 Test: alloc and free memory map ...[2024-12-07 10:17:27.323007] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:28.221 passed 00:05:28.221 Test: mem map translation ...[2024-12-07 10:17:27.366827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:28.221 [2024-12-07 10:17:27.366870] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:28.221 [2024-12-07 10:17:27.366949] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:28.221 [2024-12-07 10:17:27.366972] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:28.221 passed 00:05:28.221 Test: mem map registration ...[2024-12-07 10:17:27.430133] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:28.221 [2024-12-07 10:17:27.430182] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:28.221 passed 00:05:28.221 Test: mem map adjacent registrations ...passed 00:05:28.221 00:05:28.221 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.221 suites 1 1 n/a 0 0 00:05:28.221 tests 4 4 4 0 0 00:05:28.221 asserts 152 152 152 0 n/a 00:05:28.221 00:05:28.221 Elapsed time = 0.231 seconds 00:05:28.221 00:05:28.221 real 0m0.287s 00:05:28.221 user 0m0.246s 00:05:28.221 sys 0m0.030s 00:05:28.221 10:17:27 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.221 10:17:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:28.221 ************************************ 00:05:28.221 END TEST env_memory 00:05:28.221 ************************************ 00:05:28.482 10:17:27 env -- env/env.sh@11 -- # run_test env_vtophys 
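The env_memory run above shows autotest's run_test convention: a START TEST banner, the timed command with its real/user/sys summary, then an END TEST banner. A minimal stand-in for that wrapper, illustrative only (the real run_test in autotest_common.sh also does extra bookkeeping for the final report, which is an assumption not visible in this log):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    # usage, matching the invocation in test/env/env.sh
    run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut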
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:28.482 10:17:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.482 10:17:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.482 10:17:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.482 ************************************ 00:05:28.482 START TEST env_vtophys 00:05:28.482 ************************************ 00:05:28.482 10:17:27 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:28.482 EAL: lib.eal log level changed from notice to debug 00:05:28.482 EAL: Detected lcore 0 as core 0 on socket 0 00:05:28.482 EAL: Detected lcore 1 as core 0 on socket 0 00:05:28.482 EAL: Detected lcore 2 as core 0 on socket 0 00:05:28.482 EAL: Detected lcore 3 as core 0 on socket 0 00:05:28.482 EAL: Detected lcore 4 as core 0 on socket 0 00:05:28.482 EAL: Detected lcore 5 as core 0 on socket 0 00:05:28.482 EAL: Detected lcore 6 as core 0 on socket 0 00:05:28.482 EAL: Detected lcore 7 as core 0 on socket 0 00:05:28.482 EAL: Detected lcore 8 as core 0 on socket 0 00:05:28.482 EAL: Detected lcore 9 as core 0 on socket 0 00:05:28.482 EAL: Maximum logical cores by configuration: 128 00:05:28.482 EAL: Detected CPU lcores: 10 00:05:28.482 EAL: Detected NUMA nodes: 1 00:05:28.482 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:28.482 EAL: Detected shared linkage of DPDK 00:05:28.482 EAL: No shared files mode enabled, IPC will be disabled 00:05:28.482 EAL: Selected IOVA mode 'PA' 00:05:28.482 EAL: Probing VFIO support... 00:05:28.482 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:28.482 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:28.482 EAL: Ask a virtual area of 0x2e000 bytes 00:05:28.482 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:28.482 EAL: Setting up physically contiguous memory... 
00:05:28.482 EAL: Setting maximum number of open files to 524288 00:05:28.482 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:28.482 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:28.482 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.482 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:28.482 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.482 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.482 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:28.482 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:28.482 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.482 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:28.482 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.482 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.482 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:28.482 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:28.482 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.482 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:28.482 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.482 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.482 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:28.482 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:28.482 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.482 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:28.482 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.482 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.482 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:28.482 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:28.482 EAL: Hugepages will be freed exactly as allocated. 00:05:28.482 EAL: No shared files mode enabled, IPC is disabled 00:05:28.482 EAL: No shared files mode enabled, IPC is disabled 00:05:28.482 EAL: TSC frequency is ~2490000 KHz 00:05:28.482 EAL: Main lcore 0 is ready (tid=7f8462f0ca40;cpuset=[0]) 00:05:28.482 EAL: Trying to obtain current memory policy. 00:05:28.482 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.482 EAL: Restoring previous memory policy: 0 00:05:28.482 EAL: request: mp_malloc_sync 00:05:28.482 EAL: No shared files mode enabled, IPC is disabled 00:05:28.482 EAL: Heap on socket 0 was expanded by 2MB 00:05:28.482 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:28.482 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:28.482 EAL: Mem event callback 'spdk:(nil)' registered 00:05:28.482 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:28.742 00:05:28.742 00:05:28.742 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.742 http://cunit.sourceforge.net/ 00:05:28.742 00:05:28.742 00:05:28.742 Suite: components_suite 00:05:29.002 Test: vtophys_malloc_test ...passed 00:05:29.002 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:29.002 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.002 EAL: Restoring previous memory policy: 4 00:05:29.002 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.002 EAL: request: mp_malloc_sync 00:05:29.002 EAL: No shared files mode enabled, IPC is disabled 00:05:29.002 EAL: Heap on socket 0 was expanded by 4MB 00:05:29.002 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.002 EAL: request: mp_malloc_sync 00:05:29.002 EAL: No shared files mode enabled, IPC is disabled 00:05:29.002 EAL: Heap on socket 0 was shrunk by 4MB 00:05:29.002 EAL: Trying to obtain current memory policy. 00:05:29.002 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.002 EAL: Restoring previous memory policy: 4 00:05:29.002 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.002 EAL: request: mp_malloc_sync 00:05:29.002 EAL: No shared files mode enabled, IPC is disabled 00:05:29.002 EAL: Heap on socket 0 was expanded by 6MB 00:05:29.002 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.002 EAL: request: mp_malloc_sync 00:05:29.002 EAL: No shared files mode enabled, IPC is disabled 00:05:29.002 EAL: Heap on socket 0 was shrunk by 6MB 00:05:29.002 EAL: Trying to obtain current memory policy. 00:05:29.002 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.002 EAL: Restoring previous memory policy: 4 00:05:29.002 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.002 EAL: request: mp_malloc_sync 00:05:29.002 EAL: No shared files mode enabled, IPC is disabled 00:05:29.002 EAL: Heap on socket 0 was expanded by 10MB 00:05:29.002 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.002 EAL: request: mp_malloc_sync 00:05:29.002 EAL: No shared files mode enabled, IPC is disabled 00:05:29.002 EAL: Heap on socket 0 was shrunk by 10MB 00:05:29.002 EAL: Trying to obtain current memory policy. 00:05:29.002 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.002 EAL: Restoring previous memory policy: 4 00:05:29.002 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.002 EAL: request: mp_malloc_sync 00:05:29.002 EAL: No shared files mode enabled, IPC is disabled 00:05:29.002 EAL: Heap on socket 0 was expanded by 18MB 00:05:29.002 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.002 EAL: request: mp_malloc_sync 00:05:29.002 EAL: No shared files mode enabled, IPC is disabled 00:05:29.002 EAL: Heap on socket 0 was shrunk by 18MB 00:05:29.261 EAL: Trying to obtain current memory policy. 00:05:29.261 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.261 EAL: Restoring previous memory policy: 4 00:05:29.261 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.261 EAL: request: mp_malloc_sync 00:05:29.261 EAL: No shared files mode enabled, IPC is disabled 00:05:29.261 EAL: Heap on socket 0 was expanded by 34MB 00:05:29.261 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.261 EAL: request: mp_malloc_sync 00:05:29.261 EAL: No shared files mode enabled, IPC is disabled 00:05:29.261 EAL: Heap on socket 0 was shrunk by 34MB 00:05:29.261 EAL: Trying to obtain current memory policy. 
00:05:29.261 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.261 EAL: Restoring previous memory policy: 4 00:05:29.261 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.261 EAL: request: mp_malloc_sync 00:05:29.261 EAL: No shared files mode enabled, IPC is disabled 00:05:29.261 EAL: Heap on socket 0 was expanded by 66MB 00:05:29.261 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.261 EAL: request: mp_malloc_sync 00:05:29.261 EAL: No shared files mode enabled, IPC is disabled 00:05:29.261 EAL: Heap on socket 0 was shrunk by 66MB 00:05:29.521 EAL: Trying to obtain current memory policy. 00:05:29.521 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.521 EAL: Restoring previous memory policy: 4 00:05:29.521 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.521 EAL: request: mp_malloc_sync 00:05:29.521 EAL: No shared files mode enabled, IPC is disabled 00:05:29.521 EAL: Heap on socket 0 was expanded by 130MB 00:05:29.780 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.780 EAL: request: mp_malloc_sync 00:05:29.781 EAL: No shared files mode enabled, IPC is disabled 00:05:29.781 EAL: Heap on socket 0 was shrunk by 130MB 00:05:30.068 EAL: Trying to obtain current memory policy. 00:05:30.068 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.068 EAL: Restoring previous memory policy: 4 00:05:30.068 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.068 EAL: request: mp_malloc_sync 00:05:30.068 EAL: No shared files mode enabled, IPC is disabled 00:05:30.068 EAL: Heap on socket 0 was expanded by 258MB 00:05:30.328 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.588 EAL: request: mp_malloc_sync 00:05:30.588 EAL: No shared files mode enabled, IPC is disabled 00:05:30.588 EAL: Heap on socket 0 was shrunk by 258MB 00:05:30.848 EAL: Trying to obtain current memory policy. 00:05:30.848 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.107 EAL: Restoring previous memory policy: 4 00:05:31.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.107 EAL: request: mp_malloc_sync 00:05:31.107 EAL: No shared files mode enabled, IPC is disabled 00:05:31.107 EAL: Heap on socket 0 was expanded by 514MB 00:05:32.044 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.044 EAL: request: mp_malloc_sync 00:05:32.044 EAL: No shared files mode enabled, IPC is disabled 00:05:32.044 EAL: Heap on socket 0 was shrunk by 514MB 00:05:32.612 EAL: Trying to obtain current memory policy. 
00:05:32.612 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.871 EAL: Restoring previous memory policy: 4 00:05:32.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.871 EAL: request: mp_malloc_sync 00:05:32.871 EAL: No shared files mode enabled, IPC is disabled 00:05:32.871 EAL: Heap on socket 0 was expanded by 1026MB 00:05:34.777 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.777 EAL: request: mp_malloc_sync 00:05:34.777 EAL: No shared files mode enabled, IPC is disabled 00:05:34.777 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:36.678 passed 00:05:36.678 00:05:36.678 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.679 suites 1 1 n/a 0 0 00:05:36.679 tests 2 2 2 0 0 00:05:36.679 asserts 5796 5796 5796 0 n/a 00:05:36.679 00:05:36.679 Elapsed time = 7.738 seconds 00:05:36.679 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.679 EAL: request: mp_malloc_sync 00:05:36.679 EAL: No shared files mode enabled, IPC is disabled 00:05:36.679 EAL: Heap on socket 0 was shrunk by 2MB 00:05:36.679 EAL: No shared files mode enabled, IPC is disabled 00:05:36.679 EAL: No shared files mode enabled, IPC is disabled 00:05:36.679 EAL: No shared files mode enabled, IPC is disabled 00:05:36.679 00:05:36.679 real 0m8.083s 00:05:36.679 user 0m7.102s 00:05:36.679 sys 0m0.827s 00:05:36.679 10:17:35 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.679 ************************************ 00:05:36.679 END TEST env_vtophys 00:05:36.679 ************************************ 00:05:36.679 10:17:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:36.679 10:17:35 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:36.679 10:17:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.679 10:17:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.679 10:17:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.679 ************************************ 00:05:36.679 START TEST env_pci 00:05:36.679 ************************************ 00:05:36.679 10:17:35 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:36.679 00:05:36.679 00:05:36.679 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.679 http://cunit.sourceforge.net/ 00:05:36.679 00:05:36.679 00:05:36.679 Suite: pci 00:05:36.679 Test: pci_hook ...[2024-12-07 10:17:35.808295] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57607 has claimed it 00:05:36.679 passed 00:05:36.679 00:05:36.679 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.679 suites 1 1 n/a 0 0 00:05:36.679 tests 1 1 1 0 0 00:05:36.679 asserts 25 25 25 0 n/a 00:05:36.679 00:05:36.679 Elapsed time = 0.010 seconds 00:05:36.679 EAL: Cannot find device (10000:00:01.0) 00:05:36.679 EAL: Failed to attach device on primary process 00:05:36.679 00:05:36.679 real 0m0.114s 00:05:36.679 user 0m0.046s 00:05:36.679 sys 0m0.067s 00:05:36.679 10:17:35 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.679 ************************************ 00:05:36.679 END TEST env_pci 00:05:36.679 ************************************ 00:05:36.679 10:17:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:36.679 10:17:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:36.679 10:17:35 env -- env/env.sh@15 -- # uname 00:05:36.679 10:17:35 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:36.679 10:17:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:36.679 10:17:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.679 10:17:35 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:36.679 10:17:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.679 10:17:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.679 ************************************ 00:05:36.679 START TEST env_dpdk_post_init 00:05:36.679 ************************************ 00:05:36.679 10:17:35 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.953 EAL: Detected CPU lcores: 10 00:05:36.953 EAL: Detected NUMA nodes: 1 00:05:36.953 EAL: Detected shared linkage of DPDK 00:05:36.953 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.953 EAL: Selected IOVA mode 'PA' 00:05:36.953 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.953 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:36.953 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:36.953 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:36.953 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:36.953 Starting DPDK initialization... 00:05:36.953 Starting SPDK post initialization... 00:05:36.953 SPDK NVMe probe 00:05:36.953 Attaching to 0000:00:10.0 00:05:36.953 Attaching to 0000:00:11.0 00:05:36.953 Attaching to 0000:00:12.0 00:05:36.953 Attaching to 0000:00:13.0 00:05:36.953 Attached to 0000:00:10.0 00:05:36.953 Attached to 0000:00:11.0 00:05:36.953 Attached to 0000:00:13.0 00:05:36.953 Attached to 0000:00:12.0 00:05:36.953 Cleaning up... 
00:05:36.953 00:05:36.953 real 0m0.312s 00:05:36.953 user 0m0.106s 00:05:36.953 sys 0m0.109s 00:05:36.953 10:17:36 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.953 10:17:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.953 ************************************ 00:05:36.953 END TEST env_dpdk_post_init 00:05:36.953 ************************************ 00:05:37.213 10:17:36 env -- env/env.sh@26 -- # uname 00:05:37.213 10:17:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:37.213 10:17:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:37.213 10:17:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.213 10:17:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.213 10:17:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.213 ************************************ 00:05:37.213 START TEST env_mem_callbacks 00:05:37.213 ************************************ 00:05:37.213 10:17:36 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:37.213 EAL: Detected CPU lcores: 10 00:05:37.213 EAL: Detected NUMA nodes: 1 00:05:37.213 EAL: Detected shared linkage of DPDK 00:05:37.213 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:37.213 EAL: Selected IOVA mode 'PA' 00:05:37.213 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:37.213 00:05:37.213 00:05:37.213 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.213 http://cunit.sourceforge.net/ 00:05:37.213 00:05:37.213 00:05:37.213 Suite: memory 00:05:37.213 Test: test ... 00:05:37.213 register 0x200000200000 2097152 00:05:37.213 malloc 3145728 00:05:37.213 register 0x200000400000 4194304 00:05:37.472 buf 0x2000004fffc0 len 3145728 PASSED 00:05:37.472 malloc 64 00:05:37.472 buf 0x2000004ffec0 len 64 PASSED 00:05:37.472 malloc 4194304 00:05:37.472 register 0x200000800000 6291456 00:05:37.472 buf 0x2000009fffc0 len 4194304 PASSED 00:05:37.472 free 0x2000004fffc0 3145728 00:05:37.472 free 0x2000004ffec0 64 00:05:37.472 unregister 0x200000400000 4194304 PASSED 00:05:37.472 free 0x2000009fffc0 4194304 00:05:37.472 unregister 0x200000800000 6291456 PASSED 00:05:37.472 malloc 8388608 00:05:37.472 register 0x200000400000 10485760 00:05:37.472 buf 0x2000005fffc0 len 8388608 PASSED 00:05:37.472 free 0x2000005fffc0 8388608 00:05:37.472 unregister 0x200000400000 10485760 PASSED 00:05:37.472 passed 00:05:37.472 00:05:37.472 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.472 suites 1 1 n/a 0 0 00:05:37.472 tests 1 1 1 0 0 00:05:37.472 asserts 15 15 15 0 n/a 00:05:37.472 00:05:37.472 Elapsed time = 0.078 seconds 00:05:37.472 00:05:37.472 real 0m0.293s 00:05:37.472 user 0m0.109s 00:05:37.472 sys 0m0.080s 00:05:37.472 ************************************ 00:05:37.472 END TEST env_mem_callbacks 00:05:37.472 ************************************ 00:05:37.472 10:17:36 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.472 10:17:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:37.472 ************************************ 00:05:37.472 END TEST env 00:05:37.472 ************************************ 00:05:37.472 00:05:37.472 real 0m9.723s 00:05:37.472 user 0m7.846s 00:05:37.472 sys 0m1.506s 00:05:37.472 10:17:36 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.472 10:17:36 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:37.472 10:17:36 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:37.472 10:17:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.472 10:17:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.472 10:17:36 -- common/autotest_common.sh@10 -- # set +x 00:05:37.472 ************************************ 00:05:37.472 START TEST rpc 00:05:37.472 ************************************ 00:05:37.472 10:17:36 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:37.730 * Looking for test storage... 00:05:37.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:37.730 10:17:36 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.730 10:17:36 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.730 10:17:36 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.730 10:17:37 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.730 10:17:37 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.730 10:17:37 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.730 10:17:37 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.730 10:17:37 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.730 10:17:37 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.730 10:17:37 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.730 10:17:37 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.730 10:17:37 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.730 10:17:37 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.730 10:17:37 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.730 10:17:37 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.730 10:17:37 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:37.730 10:17:37 rpc -- scripts/common.sh@345 -- # : 1 00:05:37.730 10:17:37 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.730 10:17:37 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.730 10:17:37 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:37.730 10:17:37 rpc -- scripts/common.sh@353 -- # local d=1 00:05:37.730 10:17:37 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.730 10:17:37 rpc -- scripts/common.sh@355 -- # echo 1 00:05:37.730 10:17:37 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.730 10:17:37 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:37.730 10:17:37 rpc -- scripts/common.sh@353 -- # local d=2 00:05:37.730 10:17:37 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.730 10:17:37 rpc -- scripts/common.sh@355 -- # echo 2 00:05:37.730 10:17:37 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.730 10:17:37 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.730 10:17:37 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.730 10:17:37 rpc -- scripts/common.sh@368 -- # return 0 00:05:37.730 10:17:37 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.731 10:17:37 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.731 --rc genhtml_branch_coverage=1 00:05:37.731 --rc genhtml_function_coverage=1 00:05:37.731 --rc genhtml_legend=1 00:05:37.731 --rc geninfo_all_blocks=1 00:05:37.731 --rc geninfo_unexecuted_blocks=1 00:05:37.731 00:05:37.731 ' 00:05:37.731 10:17:37 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.731 --rc genhtml_branch_coverage=1 00:05:37.731 --rc genhtml_function_coverage=1 00:05:37.731 --rc genhtml_legend=1 00:05:37.731 --rc geninfo_all_blocks=1 00:05:37.731 --rc geninfo_unexecuted_blocks=1 00:05:37.731 00:05:37.731 ' 00:05:37.731 10:17:37 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.731 --rc genhtml_branch_coverage=1 00:05:37.731 --rc genhtml_function_coverage=1 00:05:37.731 --rc genhtml_legend=1 00:05:37.731 --rc geninfo_all_blocks=1 00:05:37.731 --rc geninfo_unexecuted_blocks=1 00:05:37.731 00:05:37.731 ' 00:05:37.731 10:17:37 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.731 --rc genhtml_branch_coverage=1 00:05:37.731 --rc genhtml_function_coverage=1 00:05:37.731 --rc genhtml_legend=1 00:05:37.731 --rc geninfo_all_blocks=1 00:05:37.731 --rc geninfo_unexecuted_blocks=1 00:05:37.731 00:05:37.731 ' 00:05:37.731 10:17:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57734 00:05:37.731 10:17:37 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:37.731 10:17:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.731 10:17:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57734 00:05:37.731 10:17:37 rpc -- common/autotest_common.sh@835 -- # '[' -z 57734 ']' 00:05:37.731 10:17:37 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.731 10:17:37 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.731 10:17:37 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.731 10:17:37 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.731 10:17:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.989 [2024-12-07 10:17:37.165942] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:37.989 [2024-12-07 10:17:37.166245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57734 ] 00:05:38.247 [2024-12-07 10:17:37.348998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.247 [2024-12-07 10:17:37.454662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:38.247 [2024-12-07 10:17:37.454718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57734' to capture a snapshot of events at runtime. 00:05:38.247 [2024-12-07 10:17:37.454730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:38.247 [2024-12-07 10:17:37.454760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:38.247 [2024-12-07 10:17:37.454769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57734 for offline analysis/debug. 00:05:38.247 [2024-12-07 10:17:37.456105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.182 10:17:38 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.182 10:17:38 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.182 10:17:38 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.182 10:17:38 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.182 10:17:38 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:39.182 10:17:38 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:39.182 10:17:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.182 10:17:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.182 10:17:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.182 ************************************ 00:05:39.182 START TEST rpc_integrity 00:05:39.182 ************************************ 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:39.182 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.182 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.182 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:39.182 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.182 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.182 10:17:38 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.182 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:39.182 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.182 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.182 { 00:05:39.182 "name": "Malloc0", 00:05:39.182 "aliases": [ 00:05:39.182 "768256ab-4d12-4979-ac76-ac65421edced" 00:05:39.182 ], 00:05:39.182 "product_name": "Malloc disk", 00:05:39.182 "block_size": 512, 00:05:39.182 "num_blocks": 16384, 00:05:39.182 "uuid": "768256ab-4d12-4979-ac76-ac65421edced", 00:05:39.182 "assigned_rate_limits": { 00:05:39.182 "rw_ios_per_sec": 0, 00:05:39.182 "rw_mbytes_per_sec": 0, 00:05:39.182 "r_mbytes_per_sec": 0, 00:05:39.182 "w_mbytes_per_sec": 0 00:05:39.182 }, 00:05:39.182 "claimed": false, 00:05:39.182 "zoned": false, 00:05:39.182 "supported_io_types": { 00:05:39.182 "read": true, 00:05:39.182 "write": true, 00:05:39.182 "unmap": true, 00:05:39.182 "flush": true, 00:05:39.182 "reset": true, 00:05:39.182 "nvme_admin": false, 00:05:39.182 "nvme_io": false, 00:05:39.182 "nvme_io_md": false, 00:05:39.182 "write_zeroes": true, 00:05:39.182 "zcopy": true, 00:05:39.182 "get_zone_info": false, 00:05:39.182 "zone_management": false, 00:05:39.182 "zone_append": false, 00:05:39.182 "compare": false, 00:05:39.182 "compare_and_write": false, 00:05:39.182 "abort": true, 00:05:39.182 "seek_hole": false, 00:05:39.182 "seek_data": false, 00:05:39.182 "copy": true, 00:05:39.182 "nvme_iov_md": false 00:05:39.182 }, 00:05:39.182 "memory_domains": [ 00:05:39.182 { 00:05:39.182 "dma_device_id": "system", 00:05:39.182 "dma_device_type": 1 00:05:39.182 }, 00:05:39.182 { 00:05:39.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.182 "dma_device_type": 2 00:05:39.182 } 00:05:39.182 ], 00:05:39.182 "driver_specific": {} 00:05:39.182 } 00:05:39.182 ]' 00:05:39.182 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:39.182 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:39.182 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.182 [2024-12-07 10:17:38.476444] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:39.182 [2024-12-07 10:17:38.476506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.182 [2024-12-07 10:17:38.476536] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:39.182 [2024-12-07 10:17:38.476550] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.182 [2024-12-07 10:17:38.478933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.182 [2024-12-07 10:17:38.479103] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:39.182 Passthru0 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.182 
10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.182 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.182 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:39.182 { 00:05:39.182 "name": "Malloc0", 00:05:39.182 "aliases": [ 00:05:39.182 "768256ab-4d12-4979-ac76-ac65421edced" 00:05:39.182 ], 00:05:39.182 "product_name": "Malloc disk", 00:05:39.183 "block_size": 512, 00:05:39.183 "num_blocks": 16384, 00:05:39.183 "uuid": "768256ab-4d12-4979-ac76-ac65421edced", 00:05:39.183 "assigned_rate_limits": { 00:05:39.183 "rw_ios_per_sec": 0, 00:05:39.183 "rw_mbytes_per_sec": 0, 00:05:39.183 "r_mbytes_per_sec": 0, 00:05:39.183 "w_mbytes_per_sec": 0 00:05:39.183 }, 00:05:39.183 "claimed": true, 00:05:39.183 "claim_type": "exclusive_write", 00:05:39.183 "zoned": false, 00:05:39.183 "supported_io_types": { 00:05:39.183 "read": true, 00:05:39.183 "write": true, 00:05:39.183 "unmap": true, 00:05:39.183 "flush": true, 00:05:39.183 "reset": true, 00:05:39.183 "nvme_admin": false, 00:05:39.183 "nvme_io": false, 00:05:39.183 "nvme_io_md": false, 00:05:39.183 "write_zeroes": true, 00:05:39.183 "zcopy": true, 00:05:39.183 "get_zone_info": false, 00:05:39.183 "zone_management": false, 00:05:39.183 "zone_append": false, 00:05:39.183 "compare": false, 00:05:39.183 "compare_and_write": false, 00:05:39.183 "abort": true, 00:05:39.183 "seek_hole": false, 00:05:39.183 "seek_data": false, 00:05:39.183 "copy": true, 00:05:39.183 "nvme_iov_md": false 00:05:39.183 }, 00:05:39.183 "memory_domains": [ 00:05:39.183 { 00:05:39.183 "dma_device_id": "system", 00:05:39.183 "dma_device_type": 1 00:05:39.183 }, 00:05:39.183 { 00:05:39.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.183 "dma_device_type": 2 00:05:39.183 } 00:05:39.183 ], 00:05:39.183 "driver_specific": {} 00:05:39.183 }, 00:05:39.183 { 00:05:39.183 "name": "Passthru0", 00:05:39.183 "aliases": [ 00:05:39.183 "0d5e62e9-ed93-5400-9130-edf448e41884" 00:05:39.183 ], 00:05:39.183 "product_name": "passthru", 00:05:39.183 "block_size": 512, 00:05:39.183 "num_blocks": 16384, 00:05:39.183 "uuid": "0d5e62e9-ed93-5400-9130-edf448e41884", 00:05:39.183 "assigned_rate_limits": { 00:05:39.183 "rw_ios_per_sec": 0, 00:05:39.183 "rw_mbytes_per_sec": 0, 00:05:39.183 "r_mbytes_per_sec": 0, 00:05:39.183 "w_mbytes_per_sec": 0 00:05:39.183 }, 00:05:39.183 "claimed": false, 00:05:39.183 "zoned": false, 00:05:39.183 "supported_io_types": { 00:05:39.183 "read": true, 00:05:39.183 "write": true, 00:05:39.183 "unmap": true, 00:05:39.183 "flush": true, 00:05:39.183 "reset": true, 00:05:39.183 "nvme_admin": false, 00:05:39.183 "nvme_io": false, 00:05:39.183 "nvme_io_md": false, 00:05:39.183 "write_zeroes": true, 00:05:39.183 "zcopy": true, 00:05:39.183 "get_zone_info": false, 00:05:39.183 "zone_management": false, 00:05:39.183 "zone_append": false, 00:05:39.183 "compare": false, 00:05:39.183 "compare_and_write": false, 00:05:39.183 "abort": true, 00:05:39.183 "seek_hole": false, 00:05:39.183 "seek_data": false, 00:05:39.183 "copy": true, 00:05:39.183 "nvme_iov_md": false 00:05:39.183 }, 00:05:39.183 "memory_domains": [ 00:05:39.183 { 00:05:39.183 "dma_device_id": "system", 00:05:39.183 "dma_device_type": 1 00:05:39.183 }, 00:05:39.183 { 00:05:39.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.183 "dma_device_type": 2 
00:05:39.183 } 00:05:39.183 ], 00:05:39.183 "driver_specific": { 00:05:39.183 "passthru": { 00:05:39.183 "name": "Passthru0", 00:05:39.183 "base_bdev_name": "Malloc0" 00:05:39.183 } 00:05:39.183 } 00:05:39.183 } 00:05:39.183 ]' 00:05:39.183 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:39.442 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:39.442 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:39.442 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.442 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.442 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.442 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:39.442 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.442 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.442 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.442 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.442 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.442 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.442 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.442 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:39.442 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:39.442 ************************************ 00:05:39.442 END TEST rpc_integrity 00:05:39.442 ************************************ 00:05:39.442 10:17:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.442 00:05:39.442 real 0m0.335s 00:05:39.442 user 0m0.178s 00:05:39.442 sys 0m0.060s 00:05:39.442 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.442 10:17:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.442 10:17:38 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:39.442 10:17:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.442 10:17:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.442 10:17:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.442 ************************************ 00:05:39.442 START TEST rpc_plugins 00:05:39.442 ************************************ 00:05:39.442 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:39.442 10:17:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:39.442 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.443 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.443 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.443 10:17:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:39.443 10:17:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:39.443 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.443 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.443 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.443 10:17:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:39.443 { 00:05:39.443 "name": "Malloc1", 00:05:39.443 "aliases": 
[ 00:05:39.443 "afdda580-d1c2-4e8e-a779-e9b13245da81" 00:05:39.443 ], 00:05:39.443 "product_name": "Malloc disk", 00:05:39.443 "block_size": 4096, 00:05:39.443 "num_blocks": 256, 00:05:39.443 "uuid": "afdda580-d1c2-4e8e-a779-e9b13245da81", 00:05:39.443 "assigned_rate_limits": { 00:05:39.443 "rw_ios_per_sec": 0, 00:05:39.443 "rw_mbytes_per_sec": 0, 00:05:39.443 "r_mbytes_per_sec": 0, 00:05:39.443 "w_mbytes_per_sec": 0 00:05:39.443 }, 00:05:39.443 "claimed": false, 00:05:39.443 "zoned": false, 00:05:39.443 "supported_io_types": { 00:05:39.443 "read": true, 00:05:39.443 "write": true, 00:05:39.443 "unmap": true, 00:05:39.443 "flush": true, 00:05:39.443 "reset": true, 00:05:39.443 "nvme_admin": false, 00:05:39.443 "nvme_io": false, 00:05:39.443 "nvme_io_md": false, 00:05:39.443 "write_zeroes": true, 00:05:39.443 "zcopy": true, 00:05:39.443 "get_zone_info": false, 00:05:39.443 "zone_management": false, 00:05:39.443 "zone_append": false, 00:05:39.443 "compare": false, 00:05:39.443 "compare_and_write": false, 00:05:39.443 "abort": true, 00:05:39.443 "seek_hole": false, 00:05:39.443 "seek_data": false, 00:05:39.443 "copy": true, 00:05:39.443 "nvme_iov_md": false 00:05:39.443 }, 00:05:39.443 "memory_domains": [ 00:05:39.443 { 00:05:39.443 "dma_device_id": "system", 00:05:39.443 "dma_device_type": 1 00:05:39.443 }, 00:05:39.443 { 00:05:39.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.443 "dma_device_type": 2 00:05:39.443 } 00:05:39.443 ], 00:05:39.443 "driver_specific": {} 00:05:39.443 } 00:05:39.443 ]' 00:05:39.443 10:17:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:39.701 10:17:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:39.701 10:17:38 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:39.701 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.701 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.701 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.701 10:17:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:39.701 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.701 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.701 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.702 10:17:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:39.702 10:17:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:39.702 ************************************ 00:05:39.702 END TEST rpc_plugins 00:05:39.702 ************************************ 00:05:39.702 10:17:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:39.702 00:05:39.702 real 0m0.165s 00:05:39.702 user 0m0.086s 00:05:39.702 sys 0m0.038s 00:05:39.702 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.702 10:17:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.702 10:17:38 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:39.702 10:17:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.702 10:17:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.702 10:17:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.702 ************************************ 00:05:39.702 START TEST rpc_trace_cmd_test 00:05:39.702 ************************************ 00:05:39.702 10:17:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:39.702 10:17:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:39.702 10:17:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:39.702 10:17:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.702 10:17:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.702 10:17:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.702 10:17:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:39.702 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57734", 00:05:39.702 "tpoint_group_mask": "0x8", 00:05:39.702 "iscsi_conn": { 00:05:39.702 "mask": "0x2", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "scsi": { 00:05:39.702 "mask": "0x4", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "bdev": { 00:05:39.702 "mask": "0x8", 00:05:39.702 "tpoint_mask": "0xffffffffffffffff" 00:05:39.702 }, 00:05:39.702 "nvmf_rdma": { 00:05:39.702 "mask": "0x10", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "nvmf_tcp": { 00:05:39.702 "mask": "0x20", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "ftl": { 00:05:39.702 "mask": "0x40", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "blobfs": { 00:05:39.702 "mask": "0x80", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "dsa": { 00:05:39.702 "mask": "0x200", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "thread": { 00:05:39.702 "mask": "0x400", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "nvme_pcie": { 00:05:39.702 "mask": "0x800", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "iaa": { 00:05:39.702 "mask": "0x1000", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "nvme_tcp": { 00:05:39.702 "mask": "0x2000", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "bdev_nvme": { 00:05:39.702 "mask": "0x4000", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "sock": { 00:05:39.702 "mask": "0x8000", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "blob": { 00:05:39.702 "mask": "0x10000", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "bdev_raid": { 00:05:39.702 "mask": "0x20000", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 }, 00:05:39.702 "scheduler": { 00:05:39.702 "mask": "0x40000", 00:05:39.702 "tpoint_mask": "0x0" 00:05:39.702 } 00:05:39.702 }' 00:05:39.702 10:17:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:39.702 10:17:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:39.702 10:17:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:39.961 10:17:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:39.961 10:17:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:39.961 10:17:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:39.961 10:17:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:39.961 10:17:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:39.961 10:17:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:39.961 ************************************ 00:05:39.961 END TEST rpc_trace_cmd_test 00:05:39.961 ************************************ 00:05:39.961 10:17:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:39.961 00:05:39.961 real 0m0.258s 
00:05:39.961 user 0m0.198s 00:05:39.961 sys 0m0.048s 00:05:39.961 10:17:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.961 10:17:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.961 10:17:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:39.961 10:17:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:39.961 10:17:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:39.961 10:17:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.961 10:17:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.961 10:17:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.961 ************************************ 00:05:39.961 START TEST rpc_daemon_integrity 00:05:39.961 ************************************ 00:05:39.961 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:39.961 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.961 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.961 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.961 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.961 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.961 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.220 { 00:05:40.220 "name": "Malloc2", 00:05:40.220 "aliases": [ 00:05:40.220 "1692c5e6-2f5b-4fd0-afec-0aba4904f160" 00:05:40.220 ], 00:05:40.220 "product_name": "Malloc disk", 00:05:40.220 "block_size": 512, 00:05:40.220 "num_blocks": 16384, 00:05:40.220 "uuid": "1692c5e6-2f5b-4fd0-afec-0aba4904f160", 00:05:40.220 "assigned_rate_limits": { 00:05:40.220 "rw_ios_per_sec": 0, 00:05:40.220 "rw_mbytes_per_sec": 0, 00:05:40.220 "r_mbytes_per_sec": 0, 00:05:40.220 "w_mbytes_per_sec": 0 00:05:40.220 }, 00:05:40.220 "claimed": false, 00:05:40.220 "zoned": false, 00:05:40.220 "supported_io_types": { 00:05:40.220 "read": true, 00:05:40.220 "write": true, 00:05:40.220 "unmap": true, 00:05:40.220 "flush": true, 00:05:40.220 "reset": true, 00:05:40.220 "nvme_admin": false, 00:05:40.220 "nvme_io": false, 00:05:40.220 "nvme_io_md": false, 00:05:40.220 "write_zeroes": true, 00:05:40.220 "zcopy": true, 00:05:40.220 "get_zone_info": false, 00:05:40.220 "zone_management": false, 00:05:40.220 "zone_append": false, 00:05:40.220 "compare": false, 00:05:40.220 
"compare_and_write": false, 00:05:40.220 "abort": true, 00:05:40.220 "seek_hole": false, 00:05:40.220 "seek_data": false, 00:05:40.220 "copy": true, 00:05:40.220 "nvme_iov_md": false 00:05:40.220 }, 00:05:40.220 "memory_domains": [ 00:05:40.220 { 00:05:40.220 "dma_device_id": "system", 00:05:40.220 "dma_device_type": 1 00:05:40.220 }, 00:05:40.220 { 00:05:40.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.220 "dma_device_type": 2 00:05:40.220 } 00:05:40.220 ], 00:05:40.220 "driver_specific": {} 00:05:40.220 } 00:05:40.220 ]' 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.220 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.220 [2024-12-07 10:17:39.449468] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:40.221 [2024-12-07 10:17:39.449637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.221 [2024-12-07 10:17:39.449681] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:40.221 [2024-12-07 10:17:39.449696] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.221 [2024-12-07 10:17:39.452077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.221 [2024-12-07 10:17:39.452119] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.221 Passthru0 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.221 { 00:05:40.221 "name": "Malloc2", 00:05:40.221 "aliases": [ 00:05:40.221 "1692c5e6-2f5b-4fd0-afec-0aba4904f160" 00:05:40.221 ], 00:05:40.221 "product_name": "Malloc disk", 00:05:40.221 "block_size": 512, 00:05:40.221 "num_blocks": 16384, 00:05:40.221 "uuid": "1692c5e6-2f5b-4fd0-afec-0aba4904f160", 00:05:40.221 "assigned_rate_limits": { 00:05:40.221 "rw_ios_per_sec": 0, 00:05:40.221 "rw_mbytes_per_sec": 0, 00:05:40.221 "r_mbytes_per_sec": 0, 00:05:40.221 "w_mbytes_per_sec": 0 00:05:40.221 }, 00:05:40.221 "claimed": true, 00:05:40.221 "claim_type": "exclusive_write", 00:05:40.221 "zoned": false, 00:05:40.221 "supported_io_types": { 00:05:40.221 "read": true, 00:05:40.221 "write": true, 00:05:40.221 "unmap": true, 00:05:40.221 "flush": true, 00:05:40.221 "reset": true, 00:05:40.221 "nvme_admin": false, 00:05:40.221 "nvme_io": false, 00:05:40.221 "nvme_io_md": false, 00:05:40.221 "write_zeroes": true, 00:05:40.221 "zcopy": true, 00:05:40.221 "get_zone_info": false, 00:05:40.221 "zone_management": false, 00:05:40.221 "zone_append": false, 00:05:40.221 "compare": false, 00:05:40.221 "compare_and_write": false, 00:05:40.221 "abort": true, 00:05:40.221 "seek_hole": false, 00:05:40.221 "seek_data": false, 
00:05:40.221 "copy": true, 00:05:40.221 "nvme_iov_md": false 00:05:40.221 }, 00:05:40.221 "memory_domains": [ 00:05:40.221 { 00:05:40.221 "dma_device_id": "system", 00:05:40.221 "dma_device_type": 1 00:05:40.221 }, 00:05:40.221 { 00:05:40.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.221 "dma_device_type": 2 00:05:40.221 } 00:05:40.221 ], 00:05:40.221 "driver_specific": {} 00:05:40.221 }, 00:05:40.221 { 00:05:40.221 "name": "Passthru0", 00:05:40.221 "aliases": [ 00:05:40.221 "b5f0e3d6-b4c2-5d31-81ae-8adabc52a43c" 00:05:40.221 ], 00:05:40.221 "product_name": "passthru", 00:05:40.221 "block_size": 512, 00:05:40.221 "num_blocks": 16384, 00:05:40.221 "uuid": "b5f0e3d6-b4c2-5d31-81ae-8adabc52a43c", 00:05:40.221 "assigned_rate_limits": { 00:05:40.221 "rw_ios_per_sec": 0, 00:05:40.221 "rw_mbytes_per_sec": 0, 00:05:40.221 "r_mbytes_per_sec": 0, 00:05:40.221 "w_mbytes_per_sec": 0 00:05:40.221 }, 00:05:40.221 "claimed": false, 00:05:40.221 "zoned": false, 00:05:40.221 "supported_io_types": { 00:05:40.221 "read": true, 00:05:40.221 "write": true, 00:05:40.221 "unmap": true, 00:05:40.221 "flush": true, 00:05:40.221 "reset": true, 00:05:40.221 "nvme_admin": false, 00:05:40.221 "nvme_io": false, 00:05:40.221 "nvme_io_md": false, 00:05:40.221 "write_zeroes": true, 00:05:40.221 "zcopy": true, 00:05:40.221 "get_zone_info": false, 00:05:40.221 "zone_management": false, 00:05:40.221 "zone_append": false, 00:05:40.221 "compare": false, 00:05:40.221 "compare_and_write": false, 00:05:40.221 "abort": true, 00:05:40.221 "seek_hole": false, 00:05:40.221 "seek_data": false, 00:05:40.221 "copy": true, 00:05:40.221 "nvme_iov_md": false 00:05:40.221 }, 00:05:40.221 "memory_domains": [ 00:05:40.221 { 00:05:40.221 "dma_device_id": "system", 00:05:40.221 "dma_device_type": 1 00:05:40.221 }, 00:05:40.221 { 00:05:40.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.221 "dma_device_type": 2 00:05:40.221 } 00:05:40.221 ], 00:05:40.221 "driver_specific": { 00:05:40.221 "passthru": { 00:05:40.221 "name": "Passthru0", 00:05:40.221 "base_bdev_name": "Malloc2" 00:05:40.221 } 00:05:40.221 } 00:05:40.221 } 00:05:40.221 ]' 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.221 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.480 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.480 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:40.480 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.480 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.480 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.480 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:40.480 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:40.480 ************************************ 00:05:40.480 END TEST rpc_daemon_integrity 00:05:40.480 ************************************ 00:05:40.480 10:17:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:40.480 00:05:40.480 real 0m0.363s 00:05:40.480 user 0m0.191s 00:05:40.480 sys 0m0.072s 00:05:40.480 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.480 10:17:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.480 10:17:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:40.480 10:17:39 rpc -- rpc/rpc.sh@84 -- # killprocess 57734 00:05:40.480 10:17:39 rpc -- common/autotest_common.sh@954 -- # '[' -z 57734 ']' 00:05:40.480 10:17:39 rpc -- common/autotest_common.sh@958 -- # kill -0 57734 00:05:40.480 10:17:39 rpc -- common/autotest_common.sh@959 -- # uname 00:05:40.480 10:17:39 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.480 10:17:39 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57734 00:05:40.480 killing process with pid 57734 00:05:40.480 10:17:39 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.480 10:17:39 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.480 10:17:39 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57734' 00:05:40.480 10:17:39 rpc -- common/autotest_common.sh@973 -- # kill 57734 00:05:40.480 10:17:39 rpc -- common/autotest_common.sh@978 -- # wait 57734 00:05:43.017 00:05:43.017 real 0m5.235s 00:05:43.017 user 0m5.703s 00:05:43.017 sys 0m1.033s 00:05:43.017 10:17:42 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.017 ************************************ 00:05:43.017 END TEST rpc 00:05:43.017 ************************************ 00:05:43.017 10:17:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.017 10:17:42 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:43.017 10:17:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.017 10:17:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.017 10:17:42 -- common/autotest_common.sh@10 -- # set +x 00:05:43.017 ************************************ 00:05:43.017 START TEST skip_rpc 00:05:43.017 ************************************ 00:05:43.017 10:17:42 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:43.017 * Looking for test storage... 
00:05:43.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:43.017 10:17:42 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.017 10:17:42 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.017 10:17:42 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.017 10:17:42 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.017 10:17:42 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.018 10:17:42 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.018 10:17:42 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:43.018 10:17:42 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.018 10:17:42 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.018 --rc genhtml_branch_coverage=1 00:05:43.018 --rc genhtml_function_coverage=1 00:05:43.018 --rc genhtml_legend=1 00:05:43.018 --rc geninfo_all_blocks=1 00:05:43.018 --rc geninfo_unexecuted_blocks=1 00:05:43.018 00:05:43.018 ' 00:05:43.018 10:17:42 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.018 --rc genhtml_branch_coverage=1 00:05:43.018 --rc genhtml_function_coverage=1 00:05:43.018 --rc genhtml_legend=1 00:05:43.018 --rc geninfo_all_blocks=1 00:05:43.018 --rc geninfo_unexecuted_blocks=1 00:05:43.018 00:05:43.018 ' 00:05:43.018 10:17:42 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:43.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.018 --rc genhtml_branch_coverage=1 00:05:43.018 --rc genhtml_function_coverage=1 00:05:43.018 --rc genhtml_legend=1 00:05:43.018 --rc geninfo_all_blocks=1 00:05:43.018 --rc geninfo_unexecuted_blocks=1 00:05:43.018 00:05:43.018 ' 00:05:43.018 10:17:42 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.018 --rc genhtml_branch_coverage=1 00:05:43.018 --rc genhtml_function_coverage=1 00:05:43.018 --rc genhtml_legend=1 00:05:43.018 --rc geninfo_all_blocks=1 00:05:43.018 --rc geninfo_unexecuted_blocks=1 00:05:43.018 00:05:43.018 ' 00:05:43.018 10:17:42 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:43.018 10:17:42 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:43.018 10:17:42 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:43.018 10:17:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.018 10:17:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.018 10:17:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.018 ************************************ 00:05:43.018 START TEST skip_rpc 00:05:43.018 ************************************ 00:05:43.277 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:43.277 10:17:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57969 00:05:43.277 10:17:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:43.277 10:17:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.277 10:17:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.277 [2024-12-07 10:17:42.476575] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
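What the next few entries check is simple: with --no-rpc-server the target never opens /var/tmp/spdk.sock, so any RPC attempt has to fail. A minimal stand-alone version of that check (binary and script paths are illustrative):

# Sketch of the basic skip_rpc check: no RPC server, so every RPC must fail.
build/bin/spdk_tgt --no-rpc-server -m 0x1 &
pid=$!
sleep 5                                      # same settle time the harness uses
if ./scripts/rpc.py spdk_get_version; then   # expected to fail: nothing is listening
    echo "unexpected: RPC succeeded without an RPC server" >&2
    exit 1
fi
kill "$pid" && wait "$pid"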
00:05:43.277 [2024-12-07 10:17:42.476864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57969 ] 00:05:43.537 [2024-12-07 10:17:42.655427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.537 [2024-12-07 10:17:42.760862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.812 10:17:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:48.812 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:48.812 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:48.812 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:48.812 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.812 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:48.812 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.812 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57969 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57969 ']' 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57969 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57969 00:05:48.813 killing process with pid 57969 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57969' 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57969 00:05:48.813 10:17:47 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57969 00:05:50.719 00:05:50.719 real 0m7.349s 00:05:50.719 user 0m6.860s 00:05:50.719 sys 0m0.417s 00:05:50.719 ************************************ 00:05:50.719 END TEST skip_rpc 00:05:50.719 ************************************ 00:05:50.719 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.719 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:50.719 10:17:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:50.719 10:17:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.719 10:17:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.719 10:17:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.719 ************************************ 00:05:50.719 START TEST skip_rpc_with_json 00:05:50.719 ************************************ 00:05:50.719 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:50.719 10:17:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:50.719 10:17:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58073 00:05:50.719 10:17:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.719 10:17:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.719 10:17:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58073 00:05:50.719 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58073 ']' 00:05:50.719 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.719 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.719 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.720 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.720 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.720 [2024-12-07 10:17:49.901884] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
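The trace that follows walks the save/restore path: create a TCP transport over RPC, snapshot the live configuration with save_config, restart the target from that JSON, and confirm the transport is re-created. Roughly, with illustrative paths:

# Sketch of the with_json round trip exercised below.
build/bin/spdk_tgt -m 0x1 &
pid=$!
sleep 5
./scripts/rpc.py nvmf_create_transport -t tcp   # first target: create the transport live
./scripts/rpc.py save_config > config.json      # snapshot the running configuration
kill "$pid"; wait "$pid"
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
pid=$!
sleep 5
grep -q 'TCP Transport Init' log.txt             # proof the transport came back from the JSON
kill "$pid"; wait "$pid"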
00:05:50.720 [2024-12-07 10:17:49.902451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58073 ] 00:05:50.978 [2024-12-07 10:17:50.081119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.978 [2024-12-07 10:17:50.186655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.912 [2024-12-07 10:17:51.035146] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:51.912 request: 00:05:51.912 { 00:05:51.912 "trtype": "tcp", 00:05:51.912 "method": "nvmf_get_transports", 00:05:51.912 "req_id": 1 00:05:51.912 } 00:05:51.912 Got JSON-RPC error response 00:05:51.912 response: 00:05:51.912 { 00:05:51.912 "code": -19, 00:05:51.912 "message": "No such device" 00:05:51.912 } 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.912 [2024-12-07 10:17:51.051255] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.912 10:17:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:51.912 { 00:05:51.912 "subsystems": [ 00:05:51.912 { 00:05:51.912 "subsystem": "fsdev", 00:05:51.912 "config": [ 00:05:51.912 { 00:05:51.912 "method": "fsdev_set_opts", 00:05:51.912 "params": { 00:05:51.912 "fsdev_io_pool_size": 65535, 00:05:51.912 "fsdev_io_cache_size": 256 00:05:51.912 } 00:05:51.912 } 00:05:51.912 ] 00:05:51.912 }, 00:05:51.912 { 00:05:51.912 "subsystem": "keyring", 00:05:51.912 "config": [] 00:05:51.912 }, 00:05:51.912 { 00:05:51.912 "subsystem": "iobuf", 00:05:51.912 "config": [ 00:05:51.912 { 00:05:51.912 "method": "iobuf_set_options", 00:05:51.913 "params": { 00:05:51.913 "small_pool_count": 8192, 00:05:51.913 "large_pool_count": 1024, 00:05:51.913 "small_bufsize": 8192, 00:05:51.913 "large_bufsize": 135168, 00:05:51.913 "enable_numa": false 00:05:51.913 } 00:05:51.913 } 00:05:51.913 ] 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "subsystem": "sock", 00:05:51.913 "config": [ 00:05:51.913 { 
00:05:51.913 "method": "sock_set_default_impl", 00:05:51.913 "params": { 00:05:51.913 "impl_name": "posix" 00:05:51.913 } 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "method": "sock_impl_set_options", 00:05:51.913 "params": { 00:05:51.913 "impl_name": "ssl", 00:05:51.913 "recv_buf_size": 4096, 00:05:51.913 "send_buf_size": 4096, 00:05:51.913 "enable_recv_pipe": true, 00:05:51.913 "enable_quickack": false, 00:05:51.913 "enable_placement_id": 0, 00:05:51.913 "enable_zerocopy_send_server": true, 00:05:51.913 "enable_zerocopy_send_client": false, 00:05:51.913 "zerocopy_threshold": 0, 00:05:51.913 "tls_version": 0, 00:05:51.913 "enable_ktls": false 00:05:51.913 } 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "method": "sock_impl_set_options", 00:05:51.913 "params": { 00:05:51.913 "impl_name": "posix", 00:05:51.913 "recv_buf_size": 2097152, 00:05:51.913 "send_buf_size": 2097152, 00:05:51.913 "enable_recv_pipe": true, 00:05:51.913 "enable_quickack": false, 00:05:51.913 "enable_placement_id": 0, 00:05:51.913 "enable_zerocopy_send_server": true, 00:05:51.913 "enable_zerocopy_send_client": false, 00:05:51.913 "zerocopy_threshold": 0, 00:05:51.913 "tls_version": 0, 00:05:51.913 "enable_ktls": false 00:05:51.913 } 00:05:51.913 } 00:05:51.913 ] 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "subsystem": "vmd", 00:05:51.913 "config": [] 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "subsystem": "accel", 00:05:51.913 "config": [ 00:05:51.913 { 00:05:51.913 "method": "accel_set_options", 00:05:51.913 "params": { 00:05:51.913 "small_cache_size": 128, 00:05:51.913 "large_cache_size": 16, 00:05:51.913 "task_count": 2048, 00:05:51.913 "sequence_count": 2048, 00:05:51.913 "buf_count": 2048 00:05:51.913 } 00:05:51.913 } 00:05:51.913 ] 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "subsystem": "bdev", 00:05:51.913 "config": [ 00:05:51.913 { 00:05:51.913 "method": "bdev_set_options", 00:05:51.913 "params": { 00:05:51.913 "bdev_io_pool_size": 65535, 00:05:51.913 "bdev_io_cache_size": 256, 00:05:51.913 "bdev_auto_examine": true, 00:05:51.913 "iobuf_small_cache_size": 128, 00:05:51.913 "iobuf_large_cache_size": 16 00:05:51.913 } 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "method": "bdev_raid_set_options", 00:05:51.913 "params": { 00:05:51.913 "process_window_size_kb": 1024, 00:05:51.913 "process_max_bandwidth_mb_sec": 0 00:05:51.913 } 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "method": "bdev_iscsi_set_options", 00:05:51.913 "params": { 00:05:51.913 "timeout_sec": 30 00:05:51.913 } 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "method": "bdev_nvme_set_options", 00:05:51.913 "params": { 00:05:51.913 "action_on_timeout": "none", 00:05:51.913 "timeout_us": 0, 00:05:51.913 "timeout_admin_us": 0, 00:05:51.913 "keep_alive_timeout_ms": 10000, 00:05:51.913 "arbitration_burst": 0, 00:05:51.913 "low_priority_weight": 0, 00:05:51.913 "medium_priority_weight": 0, 00:05:51.913 "high_priority_weight": 0, 00:05:51.913 "nvme_adminq_poll_period_us": 10000, 00:05:51.913 "nvme_ioq_poll_period_us": 0, 00:05:51.913 "io_queue_requests": 0, 00:05:51.913 "delay_cmd_submit": true, 00:05:51.913 "transport_retry_count": 4, 00:05:51.913 "bdev_retry_count": 3, 00:05:51.913 "transport_ack_timeout": 0, 00:05:51.913 "ctrlr_loss_timeout_sec": 0, 00:05:51.913 "reconnect_delay_sec": 0, 00:05:51.913 "fast_io_fail_timeout_sec": 0, 00:05:51.913 "disable_auto_failback": false, 00:05:51.913 "generate_uuids": false, 00:05:51.913 "transport_tos": 0, 00:05:51.913 "nvme_error_stat": false, 00:05:51.913 "rdma_srq_size": 0, 00:05:51.913 "io_path_stat": false, 
00:05:51.913 "allow_accel_sequence": false, 00:05:51.913 "rdma_max_cq_size": 0, 00:05:51.913 "rdma_cm_event_timeout_ms": 0, 00:05:51.913 "dhchap_digests": [ 00:05:51.913 "sha256", 00:05:51.913 "sha384", 00:05:51.913 "sha512" 00:05:51.913 ], 00:05:51.913 "dhchap_dhgroups": [ 00:05:51.913 "null", 00:05:51.913 "ffdhe2048", 00:05:51.913 "ffdhe3072", 00:05:51.913 "ffdhe4096", 00:05:51.913 "ffdhe6144", 00:05:51.913 "ffdhe8192" 00:05:51.913 ] 00:05:51.913 } 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "method": "bdev_nvme_set_hotplug", 00:05:51.913 "params": { 00:05:51.913 "period_us": 100000, 00:05:51.913 "enable": false 00:05:51.913 } 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "method": "bdev_wait_for_examine" 00:05:51.913 } 00:05:51.913 ] 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "subsystem": "scsi", 00:05:51.913 "config": null 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "subsystem": "scheduler", 00:05:51.913 "config": [ 00:05:51.913 { 00:05:51.913 "method": "framework_set_scheduler", 00:05:51.913 "params": { 00:05:51.913 "name": "static" 00:05:51.913 } 00:05:51.913 } 00:05:51.913 ] 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "subsystem": "vhost_scsi", 00:05:51.913 "config": [] 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "subsystem": "vhost_blk", 00:05:51.913 "config": [] 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "subsystem": "ublk", 00:05:51.913 "config": [] 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "subsystem": "nbd", 00:05:51.913 "config": [] 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "subsystem": "nvmf", 00:05:51.913 "config": [ 00:05:51.913 { 00:05:51.913 "method": "nvmf_set_config", 00:05:51.913 "params": { 00:05:51.913 "discovery_filter": "match_any", 00:05:51.913 "admin_cmd_passthru": { 00:05:51.913 "identify_ctrlr": false 00:05:51.913 }, 00:05:51.913 "dhchap_digests": [ 00:05:51.913 "sha256", 00:05:51.913 "sha384", 00:05:51.913 "sha512" 00:05:51.913 ], 00:05:51.913 "dhchap_dhgroups": [ 00:05:51.913 "null", 00:05:51.913 "ffdhe2048", 00:05:51.913 "ffdhe3072", 00:05:51.913 "ffdhe4096", 00:05:51.913 "ffdhe6144", 00:05:51.913 "ffdhe8192" 00:05:51.913 ] 00:05:51.913 } 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "method": "nvmf_set_max_subsystems", 00:05:51.913 "params": { 00:05:51.913 "max_subsystems": 1024 00:05:51.913 } 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "method": "nvmf_set_crdt", 00:05:51.913 "params": { 00:05:51.913 "crdt1": 0, 00:05:51.913 "crdt2": 0, 00:05:51.913 "crdt3": 0 00:05:51.913 } 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "method": "nvmf_create_transport", 00:05:51.913 "params": { 00:05:51.913 "trtype": "TCP", 00:05:51.913 "max_queue_depth": 128, 00:05:51.913 "max_io_qpairs_per_ctrlr": 127, 00:05:51.913 "in_capsule_data_size": 4096, 00:05:51.913 "max_io_size": 131072, 00:05:51.913 "io_unit_size": 131072, 00:05:51.913 "max_aq_depth": 128, 00:05:51.913 "num_shared_buffers": 511, 00:05:51.913 "buf_cache_size": 4294967295, 00:05:51.913 "dif_insert_or_strip": false, 00:05:51.913 "zcopy": false, 00:05:51.913 "c2h_success": true, 00:05:51.913 "sock_priority": 0, 00:05:51.913 "abort_timeout_sec": 1, 00:05:51.913 "ack_timeout": 0, 00:05:51.913 "data_wr_pool_size": 0 00:05:51.913 } 00:05:51.913 } 00:05:51.913 ] 00:05:51.913 }, 00:05:51.913 { 00:05:51.913 "subsystem": "iscsi", 00:05:51.913 "config": [ 00:05:51.913 { 00:05:51.913 "method": "iscsi_set_options", 00:05:51.913 "params": { 00:05:51.913 "node_base": "iqn.2016-06.io.spdk", 00:05:51.913 "max_sessions": 128, 00:05:51.913 "max_connections_per_session": 2, 00:05:51.913 "max_queue_depth": 64, 00:05:51.913 
"default_time2wait": 2, 00:05:51.913 "default_time2retain": 20, 00:05:51.913 "first_burst_length": 8192, 00:05:51.913 "immediate_data": true, 00:05:51.913 "allow_duplicated_isid": false, 00:05:51.913 "error_recovery_level": 0, 00:05:51.913 "nop_timeout": 60, 00:05:51.913 "nop_in_interval": 30, 00:05:51.913 "disable_chap": false, 00:05:51.913 "require_chap": false, 00:05:51.913 "mutual_chap": false, 00:05:51.913 "chap_group": 0, 00:05:51.913 "max_large_datain_per_connection": 64, 00:05:51.913 "max_r2t_per_connection": 4, 00:05:51.913 "pdu_pool_size": 36864, 00:05:51.913 "immediate_data_pool_size": 16384, 00:05:51.913 "data_out_pool_size": 2048 00:05:51.913 } 00:05:51.913 } 00:05:51.913 ] 00:05:51.913 } 00:05:51.913 ] 00:05:51.913 } 00:05:51.913 10:17:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:51.913 10:17:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58073 00:05:51.913 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58073 ']' 00:05:51.913 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58073 00:05:51.913 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:51.913 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.913 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58073 00:05:52.171 killing process with pid 58073 00:05:52.171 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.171 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.171 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58073' 00:05:52.171 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58073 00:05:52.171 10:17:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58073 00:05:54.706 10:17:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58122 00:05:54.706 10:17:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.706 10:17:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:59.983 10:17:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58122 00:05:59.983 10:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58122 ']' 00:05:59.983 10:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58122 00:05:59.983 10:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:59.983 10:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.983 10:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58122 00:05:59.983 killing process with pid 58122 00:05:59.983 10:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.983 10:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.983 10:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58122' 00:05:59.983 10:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58122 00:05:59.983 10:17:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58122 00:06:01.889 10:18:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:01.889 10:18:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:01.889 00:06:01.889 real 0m11.162s 00:06:01.889 user 0m10.539s 00:06:01.889 sys 0m0.922s 00:06:01.889 10:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.889 ************************************ 00:06:01.889 10:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.889 END TEST skip_rpc_with_json 00:06:01.889 ************************************ 00:06:01.889 10:18:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:01.889 10:18:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.889 10:18:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.889 10:18:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.889 ************************************ 00:06:01.889 START TEST skip_rpc_with_delay 00:06:01.889 ************************************ 00:06:01.889 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.890 [2024-12-07 10:18:01.143869] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
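That error is the expected outcome: --wait-for-rpc holds initialization until an RPC arrives, which cannot happen when --no-rpc-server is also given, so the target must refuse to start. The negative check reduces to:

# Sketch: the contradictory flag pair has to be rejected before the app initializes.
if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: target started with contradictory RPC flags" >&2
    exit 1
fi
# expected on stderr: "Cannot use '--wait-for-rpc' if no RPC server is going to be started."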
00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.890 00:06:01.890 real 0m0.183s 00:06:01.890 user 0m0.097s 00:06:01.890 sys 0m0.084s 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.890 10:18:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:01.890 ************************************ 00:06:01.890 END TEST skip_rpc_with_delay 00:06:01.890 ************************************ 00:06:02.149 10:18:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:02.149 10:18:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:02.149 10:18:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:02.149 10:18:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.149 10:18:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.149 10:18:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.149 ************************************ 00:06:02.149 START TEST exit_on_failed_rpc_init 00:06:02.149 ************************************ 00:06:02.149 10:18:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:02.149 10:18:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58257 00:06:02.149 10:18:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.149 10:18:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58257 00:06:02.149 10:18:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58257 ']' 00:06:02.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.149 10:18:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.149 10:18:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.149 10:18:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.149 10:18:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.149 10:18:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.149 [2024-12-07 10:18:01.409934] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:02.149 [2024-12-07 10:18:01.410096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58257 ] 00:06:02.409 [2024-12-07 10:18:01.592070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.409 [2024-12-07 10:18:01.700257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:03.348 10:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.348 [2024-12-07 10:18:02.671587] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:03.348 [2024-12-07 10:18:02.671912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58275 ] 00:06:03.608 [2024-12-07 10:18:02.858845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.867 [2024-12-07 10:18:02.973681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.867 [2024-12-07 10:18:02.973769] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
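The two traces above are the heart of exit_on_failed_rpc_init: the first target owns /var/tmp/spdk.sock, so a second target started with the default RPC socket fails RPC initialization and exits non-zero. A stand-alone reproduction (paths illustrative; pointing the second instance at another socket, e.g. with -r, is the normal way to avoid the collision outside this test):

# Sketch: two targets, one default RPC socket -- the second must fail to initialize.
build/bin/spdk_tgt -m 0x1 &          # first instance binds /var/tmp/spdk.sock
pid=$!
sleep 5
if build/bin/spdk_tgt -m 0x2; then   # second instance: socket already in use, init must fail
    echo "unexpected: second target initialized on a busy RPC socket" >&2
    exit 1
fi
kill "$pid"; wait "$pid"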
00:06:03.867 [2024-12-07 10:18:02.973802] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:03.867 [2024-12-07 10:18:02.973822] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58257 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58257 ']' 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58257 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58257 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.127 killing process with pid 58257 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58257' 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58257 00:06:04.127 10:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58257 00:06:06.665 00:06:06.665 real 0m4.339s 00:06:06.665 user 0m4.627s 00:06:06.665 sys 0m0.641s 00:06:06.665 10:18:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.665 ************************************ 00:06:06.665 END TEST exit_on_failed_rpc_init 00:06:06.665 ************************************ 00:06:06.665 10:18:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:06.665 10:18:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:06.665 00:06:06.665 real 0m23.584s 00:06:06.665 user 0m22.351s 00:06:06.665 sys 0m2.387s 00:06:06.665 10:18:05 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.665 ************************************ 00:06:06.665 END TEST skip_rpc 00:06:06.665 ************************************ 00:06:06.665 10:18:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.665 10:18:05 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:06.665 10:18:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.665 10:18:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.665 10:18:05 -- common/autotest_common.sh@10 -- # set +x 00:06:06.665 
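Every suite above tears its target down through the same guard sequence before reporting its timing; a simplified reconstruction of that pattern as it appears in the trace (the real killprocess helper lives in autotest_common.sh and is more thorough):

# Simplified reconstruction of the killprocess pattern visible in the trace.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # nothing to do without a pid
    kill -0 "$pid" 2>/dev/null || return 0    # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")   # never signal the wrong thing (e.g. a sudo wrapper)
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
}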
************************************ 00:06:06.665 START TEST rpc_client 00:06:06.665 ************************************ 00:06:06.665 10:18:05 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:06.665 * Looking for test storage... 00:06:06.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:06.665 10:18:05 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.665 10:18:05 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.665 10:18:05 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.665 10:18:05 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.665 10:18:05 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.665 10:18:05 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.665 10:18:05 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.665 10:18:06 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.665 10:18:06 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.665 10:18:06 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.665 10:18:06 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.665 10:18:06 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.665 10:18:06 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.665 10:18:06 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.665 10:18:06 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.665 10:18:06 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:06.665 10:18:06 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:06.665 10:18:06 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.665 10:18:06 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.666 10:18:06 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:06.666 10:18:06 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:06.666 10:18:06 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.666 10:18:06 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:06.666 10:18:06 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.666 10:18:06 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:06.925 10:18:06 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:06.925 10:18:06 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.925 10:18:06 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:06.925 10:18:06 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.925 10:18:06 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.925 10:18:06 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.925 10:18:06 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:06.925 10:18:06 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.925 10:18:06 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.925 --rc genhtml_branch_coverage=1 00:06:06.925 --rc genhtml_function_coverage=1 00:06:06.925 --rc genhtml_legend=1 00:06:06.925 --rc geninfo_all_blocks=1 00:06:06.925 --rc geninfo_unexecuted_blocks=1 00:06:06.925 00:06:06.925 ' 00:06:06.925 10:18:06 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.925 --rc genhtml_branch_coverage=1 00:06:06.925 --rc genhtml_function_coverage=1 00:06:06.925 --rc genhtml_legend=1 00:06:06.925 --rc geninfo_all_blocks=1 00:06:06.925 --rc geninfo_unexecuted_blocks=1 00:06:06.925 00:06:06.925 ' 00:06:06.925 10:18:06 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.925 --rc genhtml_branch_coverage=1 00:06:06.925 --rc genhtml_function_coverage=1 00:06:06.925 --rc genhtml_legend=1 00:06:06.925 --rc geninfo_all_blocks=1 00:06:06.925 --rc geninfo_unexecuted_blocks=1 00:06:06.925 00:06:06.925 ' 00:06:06.925 10:18:06 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.925 --rc genhtml_branch_coverage=1 00:06:06.925 --rc genhtml_function_coverage=1 00:06:06.925 --rc genhtml_legend=1 00:06:06.925 --rc geninfo_all_blocks=1 00:06:06.925 --rc geninfo_unexecuted_blocks=1 00:06:06.925 00:06:06.925 ' 00:06:06.925 10:18:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:06.925 OK 00:06:06.926 10:18:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:06.926 00:06:06.926 real 0m0.325s 00:06:06.926 user 0m0.171s 00:06:06.926 sys 0m0.171s 00:06:06.926 10:18:06 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.926 ************************************ 00:06:06.926 END TEST rpc_client 00:06:06.926 ************************************ 00:06:06.926 10:18:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:06.926 10:18:06 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:06.926 10:18:06 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.926 10:18:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.926 10:18:06 -- common/autotest_common.sh@10 -- # set +x 00:06:06.926 ************************************ 00:06:06.926 START TEST json_config 00:06:06.926 ************************************ 00:06:06.926 10:18:06 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:06.926 10:18:06 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.926 10:18:06 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.926 10:18:06 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:07.186 10:18:06 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:07.186 10:18:06 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.186 10:18:06 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.186 10:18:06 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.186 10:18:06 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.187 10:18:06 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.187 10:18:06 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.187 10:18:06 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.187 10:18:06 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.187 10:18:06 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.187 10:18:06 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.187 10:18:06 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.187 10:18:06 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:07.187 10:18:06 json_config -- scripts/common.sh@345 -- # : 1 00:06:07.187 10:18:06 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.187 10:18:06 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.187 10:18:06 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:07.187 10:18:06 json_config -- scripts/common.sh@353 -- # local d=1 00:06:07.187 10:18:06 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.187 10:18:06 json_config -- scripts/common.sh@355 -- # echo 1 00:06:07.187 10:18:06 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.187 10:18:06 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:07.187 10:18:06 json_config -- scripts/common.sh@353 -- # local d=2 00:06:07.187 10:18:06 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.187 10:18:06 json_config -- scripts/common.sh@355 -- # echo 2 00:06:07.187 10:18:06 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.187 10:18:06 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.187 10:18:06 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.187 10:18:06 json_config -- scripts/common.sh@368 -- # return 0 00:06:07.187 10:18:06 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.187 10:18:06 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:07.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.187 --rc genhtml_branch_coverage=1 00:06:07.187 --rc genhtml_function_coverage=1 00:06:07.187 --rc genhtml_legend=1 00:06:07.187 --rc geninfo_all_blocks=1 00:06:07.187 --rc geninfo_unexecuted_blocks=1 00:06:07.187 00:06:07.187 ' 00:06:07.187 10:18:06 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:07.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.187 --rc genhtml_branch_coverage=1 00:06:07.187 --rc genhtml_function_coverage=1 00:06:07.187 --rc genhtml_legend=1 00:06:07.187 --rc geninfo_all_blocks=1 00:06:07.187 --rc geninfo_unexecuted_blocks=1 00:06:07.187 00:06:07.187 ' 00:06:07.187 10:18:06 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:07.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.187 --rc genhtml_branch_coverage=1 00:06:07.187 --rc genhtml_function_coverage=1 00:06:07.187 --rc genhtml_legend=1 00:06:07.187 --rc geninfo_all_blocks=1 00:06:07.187 --rc geninfo_unexecuted_blocks=1 00:06:07.187 00:06:07.187 ' 00:06:07.187 10:18:06 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:07.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.187 --rc genhtml_branch_coverage=1 00:06:07.187 --rc genhtml_function_coverage=1 00:06:07.187 --rc genhtml_legend=1 00:06:07.187 --rc geninfo_all_blocks=1 00:06:07.187 --rc geninfo_unexecuted_blocks=1 00:06:07.187 00:06:07.187 ' 00:06:07.187 10:18:06 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.187 10:18:06 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b25f9a59-3323-475f-a653-2ff14ee861c0 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b25f9a59-3323-475f-a653-2ff14ee861c0 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:07.187 10:18:06 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.187 10:18:06 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.187 10:18:06 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.187 10:18:06 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.187 10:18:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.187 10:18:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.187 10:18:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.187 10:18:06 json_config -- paths/export.sh@5 -- # export PATH 00:06:07.187 10:18:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@51 -- # : 0 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:07.187 10:18:06 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:07.187 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:07.187 10:18:06 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:07.187 10:18:06 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:07.187 10:18:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:07.187 10:18:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:07.187 10:18:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:07.187 10:18:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:07.187 WARNING: No tests are enabled so not running JSON configuration tests 00:06:07.187 10:18:06 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:07.187 10:18:06 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:07.187 00:06:07.187 real 0m0.229s 00:06:07.187 user 0m0.129s 00:06:07.187 sys 0m0.110s 00:06:07.187 10:18:06 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.187 10:18:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.187 ************************************ 00:06:07.187 END TEST json_config 00:06:07.187 ************************************ 00:06:07.188 10:18:06 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:07.188 10:18:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.188 10:18:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.188 10:18:06 -- common/autotest_common.sh@10 -- # set +x 00:06:07.188 ************************************ 00:06:07.188 START TEST json_config_extra_key 00:06:07.188 ************************************ 00:06:07.188 10:18:06 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:07.456 10:18:06 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:07.456 10:18:06 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:07.456 10:18:06 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:07.456 10:18:06 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.456 10:18:06 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.456 10:18:06 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.457 10:18:06 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.457 10:18:06 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:07.457 10:18:06 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.457 10:18:06 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.457 --rc genhtml_branch_coverage=1 00:06:07.457 --rc genhtml_function_coverage=1 00:06:07.457 --rc genhtml_legend=1 00:06:07.457 --rc geninfo_all_blocks=1 00:06:07.457 --rc geninfo_unexecuted_blocks=1 00:06:07.457 00:06:07.457 ' 00:06:07.457 10:18:06 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.457 --rc genhtml_branch_coverage=1 00:06:07.457 --rc genhtml_function_coverage=1 00:06:07.457 --rc genhtml_legend=1 00:06:07.457 --rc geninfo_all_blocks=1 00:06:07.457 --rc geninfo_unexecuted_blocks=1 00:06:07.457 00:06:07.457 ' 00:06:07.457 10:18:06 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.457 --rc genhtml_branch_coverage=1 00:06:07.457 --rc genhtml_function_coverage=1 00:06:07.457 --rc genhtml_legend=1 00:06:07.457 --rc geninfo_all_blocks=1 00:06:07.457 --rc geninfo_unexecuted_blocks=1 00:06:07.457 00:06:07.457 ' 00:06:07.457 10:18:06 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.457 --rc genhtml_branch_coverage=1 00:06:07.457 --rc 
genhtml_function_coverage=1 00:06:07.457 --rc genhtml_legend=1 00:06:07.457 --rc geninfo_all_blocks=1 00:06:07.457 --rc geninfo_unexecuted_blocks=1 00:06:07.457 00:06:07.457 ' 00:06:07.457 10:18:06 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b25f9a59-3323-475f-a653-2ff14ee861c0 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b25f9a59-3323-475f-a653-2ff14ee861c0 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:07.457 10:18:06 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.457 10:18:06 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.457 10:18:06 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.457 10:18:06 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.457 10:18:06 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.457 10:18:06 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.457 10:18:06 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.457 10:18:06 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:07.457 10:18:06 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:07.457 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:07.457 10:18:06 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:07.457 10:18:06 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:07.457 10:18:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:07.457 10:18:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:07.457 10:18:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:07.457 10:18:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:07.457 10:18:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:07.457 10:18:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:07.457 10:18:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:07.457 10:18:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:07.457 10:18:06 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:07.457 INFO: launching applications... 00:06:07.457 10:18:06 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
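The trace above shows test/json_config/common.sh keeping all per-app state in bash associative arrays keyed by a logical app name ('target'): app_pid, app_socket, app_params and configs_path. A minimal sketch of that bookkeeping pattern follows; it is illustrative only, not the real common.sh, and the start_app helper name is an assumption. The spdk_tgt flags (-m, -s, -r, --json) are the ones visible in the launch command later in this trace.

    #!/usr/bin/env bash
    # Per-app state keyed by a logical name, mirroring the arrays in the trace above.
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']="$PWD/extra_key.json")

    start_app() {                                   # hypothetical helper name
        local app=$1
        # Launch spdk_tgt with the stored core mask / memory size, RPC socket
        # and JSON config, then remember its PID for the later shutdown loop.
        ./build/bin/spdk_tgt ${app_params[$app]} \
            -r "${app_socket[$app]}" \
            --json "${configs_path[$app]}" &
        app_pid[$app]=$!
    }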
00:06:07.457 10:18:06 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:07.457 10:18:06 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:07.457 10:18:06 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:07.457 10:18:06 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:07.457 10:18:06 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:07.457 10:18:06 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:07.457 10:18:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.457 10:18:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.457 10:18:06 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58485 00:06:07.457 Waiting for target to run... 00:06:07.457 10:18:06 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:07.457 10:18:06 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58485 /var/tmp/spdk_tgt.sock 00:06:07.457 10:18:06 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58485 ']' 00:06:07.457 10:18:06 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:07.457 10:18:06 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.457 10:18:06 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:07.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:07.457 10:18:06 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:07.457 10:18:06 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.457 10:18:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:07.716 [2024-12-07 10:18:06.811689] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:07.716 [2024-12-07 10:18:06.811824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58485 ] 00:06:08.283 [2024-12-07 10:18:07.379743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.283 [2024-12-07 10:18:07.496554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.871 10:18:08 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.871 10:18:08 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:08.871 00:06:08.871 10:18:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:08.871 INFO: shutting down applications... 00:06:08.871 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
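The "shutting down applications..." step that follows does not wait on the child; it sends SIGINT to the target and then probes it with kill -0 every 0.5 s for up to 30 iterations until the PID disappears. A condensed sketch of that loop, assuming pid holds the spdk_tgt PID recorded at launch:

    # Ask spdk_tgt to shut down cleanly, then poll until the process is gone.
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only tests that the PID still exists
        sleep 0.5
    done
    kill -0 "$pid" 2>/dev/null || echo 'SPDK target shutdown done'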
00:06:08.871 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:08.871 10:18:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:08.871 10:18:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.871 10:18:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58485 ]] 00:06:08.871 10:18:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58485 00:06:08.871 10:18:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.871 10:18:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.871 10:18:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 00:06:08.871 10:18:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.513 10:18:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.513 10:18:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.513 10:18:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 00:06:09.513 10:18:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.080 10:18:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.080 10:18:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.080 10:18:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 00:06:10.080 10:18:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.645 10:18:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.645 10:18:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.645 10:18:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 00:06:10.645 10:18:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.903 10:18:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.903 10:18:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.903 10:18:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 00:06:10.903 10:18:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.470 10:18:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.470 10:18:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.470 10:18:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 00:06:11.470 10:18:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.039 10:18:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.039 10:18:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.039 10:18:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 00:06:12.039 10:18:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:12.039 10:18:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:12.039 10:18:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:12.039 SPDK target shutdown done 00:06:12.039 10:18:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:12.039 Success 00:06:12.039 10:18:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:12.039 00:06:12.039 real 0m4.749s 00:06:12.039 user 0m3.898s 00:06:12.039 sys 0m0.777s 00:06:12.039 
10:18:11 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.039 10:18:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:12.039 ************************************ 00:06:12.039 END TEST json_config_extra_key 00:06:12.039 ************************************ 00:06:12.039 10:18:11 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.039 10:18:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.039 10:18:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.039 10:18:11 -- common/autotest_common.sh@10 -- # set +x 00:06:12.039 ************************************ 00:06:12.039 START TEST alias_rpc 00:06:12.039 ************************************ 00:06:12.039 10:18:11 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.299 * Looking for test storage... 00:06:12.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.299 10:18:11 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.299 --rc genhtml_branch_coverage=1 00:06:12.299 --rc genhtml_function_coverage=1 00:06:12.299 --rc genhtml_legend=1 00:06:12.299 --rc geninfo_all_blocks=1 00:06:12.299 --rc geninfo_unexecuted_blocks=1 00:06:12.299 00:06:12.299 ' 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.299 --rc genhtml_branch_coverage=1 00:06:12.299 --rc genhtml_function_coverage=1 00:06:12.299 --rc genhtml_legend=1 00:06:12.299 --rc geninfo_all_blocks=1 00:06:12.299 --rc geninfo_unexecuted_blocks=1 00:06:12.299 00:06:12.299 ' 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.299 --rc genhtml_branch_coverage=1 00:06:12.299 --rc genhtml_function_coverage=1 00:06:12.299 --rc genhtml_legend=1 00:06:12.299 --rc geninfo_all_blocks=1 00:06:12.299 --rc geninfo_unexecuted_blocks=1 00:06:12.299 00:06:12.299 ' 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.299 --rc genhtml_branch_coverage=1 00:06:12.299 --rc genhtml_function_coverage=1 00:06:12.299 --rc genhtml_legend=1 00:06:12.299 --rc geninfo_all_blocks=1 00:06:12.299 --rc geninfo_unexecuted_blocks=1 00:06:12.299 00:06:12.299 ' 00:06:12.299 10:18:11 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.299 10:18:11 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58601 00:06:12.299 10:18:11 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58601 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58601 ']' 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.299 10:18:11 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.299 10:18:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.558 [2024-12-07 10:18:11.667203] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:12.558 [2024-12-07 10:18:11.667334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58601 ] 00:06:12.558 [2024-12-07 10:18:11.852031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.817 [2024-12-07 10:18:11.959015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.755 10:18:12 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.755 10:18:12 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.755 10:18:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:13.755 10:18:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58601 00:06:13.755 10:18:13 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58601 ']' 00:06:13.755 10:18:13 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58601 00:06:13.755 10:18:13 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:13.755 10:18:13 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.755 10:18:13 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58601 00:06:13.755 10:18:13 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.755 10:18:13 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.755 killing process with pid 58601 00:06:13.755 10:18:13 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58601' 00:06:13.755 10:18:13 alias_rpc -- common/autotest_common.sh@973 -- # kill 58601 00:06:13.755 10:18:13 alias_rpc -- common/autotest_common.sh@978 -- # wait 58601 00:06:16.293 00:06:16.293 real 0m4.070s 00:06:16.293 user 0m3.935s 00:06:16.293 sys 0m0.642s 00:06:16.293 10:18:15 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.293 10:18:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.293 ************************************ 00:06:16.293 END TEST alias_rpc 00:06:16.293 ************************************ 00:06:16.293 10:18:15 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:16.293 10:18:15 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:16.293 10:18:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.293 10:18:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.293 10:18:15 -- common/autotest_common.sh@10 -- # set +x 00:06:16.293 ************************************ 00:06:16.293 START TEST spdkcli_tcp 00:06:16.293 ************************************ 00:06:16.293 10:18:15 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:16.293 * Looking for test storage... 
00:06:16.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:16.293 10:18:15 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.293 10:18:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.293 10:18:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.552 10:18:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.552 10:18:15 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:16.552 10:18:15 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.552 10:18:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.552 --rc genhtml_branch_coverage=1 00:06:16.552 --rc genhtml_function_coverage=1 00:06:16.552 --rc genhtml_legend=1 00:06:16.552 --rc geninfo_all_blocks=1 00:06:16.552 --rc geninfo_unexecuted_blocks=1 00:06:16.552 00:06:16.552 ' 00:06:16.552 10:18:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.552 --rc genhtml_branch_coverage=1 00:06:16.552 --rc genhtml_function_coverage=1 00:06:16.552 --rc genhtml_legend=1 00:06:16.552 --rc geninfo_all_blocks=1 00:06:16.552 --rc geninfo_unexecuted_blocks=1 00:06:16.552 
00:06:16.552 ' 00:06:16.552 10:18:15 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.552 --rc genhtml_branch_coverage=1 00:06:16.552 --rc genhtml_function_coverage=1 00:06:16.552 --rc genhtml_legend=1 00:06:16.552 --rc geninfo_all_blocks=1 00:06:16.552 --rc geninfo_unexecuted_blocks=1 00:06:16.552 00:06:16.552 ' 00:06:16.552 10:18:15 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.552 --rc genhtml_branch_coverage=1 00:06:16.552 --rc genhtml_function_coverage=1 00:06:16.552 --rc genhtml_legend=1 00:06:16.552 --rc geninfo_all_blocks=1 00:06:16.552 --rc geninfo_unexecuted_blocks=1 00:06:16.552 00:06:16.552 ' 00:06:16.552 10:18:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:16.552 10:18:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:16.552 10:18:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:16.552 10:18:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:16.552 10:18:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:16.552 10:18:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:16.552 10:18:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:16.552 10:18:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.552 10:18:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.552 10:18:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58704 00:06:16.552 10:18:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:16.552 10:18:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58704 00:06:16.552 10:18:15 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58704 ']' 00:06:16.552 10:18:15 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.552 10:18:15 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.552 10:18:15 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.552 10:18:15 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.553 10:18:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.553 [2024-12-07 10:18:15.818855] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:16.553 [2024-12-07 10:18:15.819001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58704 ] 00:06:16.811 [2024-12-07 10:18:16.005961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.811 [2024-12-07 10:18:16.117400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.811 [2024-12-07 10:18:16.117449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.747 10:18:17 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.747 10:18:17 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:17.747 10:18:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58726 00:06:17.747 10:18:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:17.747 10:18:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:18.006 [ 00:06:18.006 "bdev_malloc_delete", 00:06:18.006 "bdev_malloc_create", 00:06:18.006 "bdev_null_resize", 00:06:18.006 "bdev_null_delete", 00:06:18.006 "bdev_null_create", 00:06:18.006 "bdev_nvme_cuse_unregister", 00:06:18.006 "bdev_nvme_cuse_register", 00:06:18.006 "bdev_opal_new_user", 00:06:18.006 "bdev_opal_set_lock_state", 00:06:18.006 "bdev_opal_delete", 00:06:18.006 "bdev_opal_get_info", 00:06:18.006 "bdev_opal_create", 00:06:18.006 "bdev_nvme_opal_revert", 00:06:18.006 "bdev_nvme_opal_init", 00:06:18.006 "bdev_nvme_send_cmd", 00:06:18.006 "bdev_nvme_set_keys", 00:06:18.006 "bdev_nvme_get_path_iostat", 00:06:18.006 "bdev_nvme_get_mdns_discovery_info", 00:06:18.006 "bdev_nvme_stop_mdns_discovery", 00:06:18.006 "bdev_nvme_start_mdns_discovery", 00:06:18.006 "bdev_nvme_set_multipath_policy", 00:06:18.006 "bdev_nvme_set_preferred_path", 00:06:18.006 "bdev_nvme_get_io_paths", 00:06:18.006 "bdev_nvme_remove_error_injection", 00:06:18.006 "bdev_nvme_add_error_injection", 00:06:18.006 "bdev_nvme_get_discovery_info", 00:06:18.006 "bdev_nvme_stop_discovery", 00:06:18.006 "bdev_nvme_start_discovery", 00:06:18.006 "bdev_nvme_get_controller_health_info", 00:06:18.006 "bdev_nvme_disable_controller", 00:06:18.006 "bdev_nvme_enable_controller", 00:06:18.006 "bdev_nvme_reset_controller", 00:06:18.006 "bdev_nvme_get_transport_statistics", 00:06:18.006 "bdev_nvme_apply_firmware", 00:06:18.006 "bdev_nvme_detach_controller", 00:06:18.006 "bdev_nvme_get_controllers", 00:06:18.006 "bdev_nvme_attach_controller", 00:06:18.006 "bdev_nvme_set_hotplug", 00:06:18.006 "bdev_nvme_set_options", 00:06:18.006 "bdev_passthru_delete", 00:06:18.006 "bdev_passthru_create", 00:06:18.006 "bdev_lvol_set_parent_bdev", 00:06:18.006 "bdev_lvol_set_parent", 00:06:18.006 "bdev_lvol_check_shallow_copy", 00:06:18.006 "bdev_lvol_start_shallow_copy", 00:06:18.007 "bdev_lvol_grow_lvstore", 00:06:18.007 "bdev_lvol_get_lvols", 00:06:18.007 "bdev_lvol_get_lvstores", 00:06:18.007 "bdev_lvol_delete", 00:06:18.007 "bdev_lvol_set_read_only", 00:06:18.007 "bdev_lvol_resize", 00:06:18.007 "bdev_lvol_decouple_parent", 00:06:18.007 "bdev_lvol_inflate", 00:06:18.007 "bdev_lvol_rename", 00:06:18.007 "bdev_lvol_clone_bdev", 00:06:18.007 "bdev_lvol_clone", 00:06:18.007 "bdev_lvol_snapshot", 00:06:18.007 "bdev_lvol_create", 00:06:18.007 "bdev_lvol_delete_lvstore", 00:06:18.007 "bdev_lvol_rename_lvstore", 00:06:18.007 
"bdev_lvol_create_lvstore", 00:06:18.007 "bdev_raid_set_options", 00:06:18.007 "bdev_raid_remove_base_bdev", 00:06:18.007 "bdev_raid_add_base_bdev", 00:06:18.007 "bdev_raid_delete", 00:06:18.007 "bdev_raid_create", 00:06:18.007 "bdev_raid_get_bdevs", 00:06:18.007 "bdev_error_inject_error", 00:06:18.007 "bdev_error_delete", 00:06:18.007 "bdev_error_create", 00:06:18.007 "bdev_split_delete", 00:06:18.007 "bdev_split_create", 00:06:18.007 "bdev_delay_delete", 00:06:18.007 "bdev_delay_create", 00:06:18.007 "bdev_delay_update_latency", 00:06:18.007 "bdev_zone_block_delete", 00:06:18.007 "bdev_zone_block_create", 00:06:18.007 "blobfs_create", 00:06:18.007 "blobfs_detect", 00:06:18.007 "blobfs_set_cache_size", 00:06:18.007 "bdev_xnvme_delete", 00:06:18.007 "bdev_xnvme_create", 00:06:18.007 "bdev_aio_delete", 00:06:18.007 "bdev_aio_rescan", 00:06:18.007 "bdev_aio_create", 00:06:18.007 "bdev_ftl_set_property", 00:06:18.007 "bdev_ftl_get_properties", 00:06:18.007 "bdev_ftl_get_stats", 00:06:18.007 "bdev_ftl_unmap", 00:06:18.007 "bdev_ftl_unload", 00:06:18.007 "bdev_ftl_delete", 00:06:18.007 "bdev_ftl_load", 00:06:18.007 "bdev_ftl_create", 00:06:18.007 "bdev_virtio_attach_controller", 00:06:18.007 "bdev_virtio_scsi_get_devices", 00:06:18.007 "bdev_virtio_detach_controller", 00:06:18.007 "bdev_virtio_blk_set_hotplug", 00:06:18.007 "bdev_iscsi_delete", 00:06:18.007 "bdev_iscsi_create", 00:06:18.007 "bdev_iscsi_set_options", 00:06:18.007 "accel_error_inject_error", 00:06:18.007 "ioat_scan_accel_module", 00:06:18.007 "dsa_scan_accel_module", 00:06:18.007 "iaa_scan_accel_module", 00:06:18.007 "keyring_file_remove_key", 00:06:18.007 "keyring_file_add_key", 00:06:18.007 "keyring_linux_set_options", 00:06:18.007 "fsdev_aio_delete", 00:06:18.007 "fsdev_aio_create", 00:06:18.007 "iscsi_get_histogram", 00:06:18.007 "iscsi_enable_histogram", 00:06:18.007 "iscsi_set_options", 00:06:18.007 "iscsi_get_auth_groups", 00:06:18.007 "iscsi_auth_group_remove_secret", 00:06:18.007 "iscsi_auth_group_add_secret", 00:06:18.007 "iscsi_delete_auth_group", 00:06:18.007 "iscsi_create_auth_group", 00:06:18.007 "iscsi_set_discovery_auth", 00:06:18.007 "iscsi_get_options", 00:06:18.007 "iscsi_target_node_request_logout", 00:06:18.007 "iscsi_target_node_set_redirect", 00:06:18.007 "iscsi_target_node_set_auth", 00:06:18.007 "iscsi_target_node_add_lun", 00:06:18.007 "iscsi_get_stats", 00:06:18.007 "iscsi_get_connections", 00:06:18.007 "iscsi_portal_group_set_auth", 00:06:18.007 "iscsi_start_portal_group", 00:06:18.007 "iscsi_delete_portal_group", 00:06:18.007 "iscsi_create_portal_group", 00:06:18.007 "iscsi_get_portal_groups", 00:06:18.007 "iscsi_delete_target_node", 00:06:18.007 "iscsi_target_node_remove_pg_ig_maps", 00:06:18.007 "iscsi_target_node_add_pg_ig_maps", 00:06:18.007 "iscsi_create_target_node", 00:06:18.007 "iscsi_get_target_nodes", 00:06:18.007 "iscsi_delete_initiator_group", 00:06:18.007 "iscsi_initiator_group_remove_initiators", 00:06:18.007 "iscsi_initiator_group_add_initiators", 00:06:18.007 "iscsi_create_initiator_group", 00:06:18.007 "iscsi_get_initiator_groups", 00:06:18.007 "nvmf_set_crdt", 00:06:18.007 "nvmf_set_config", 00:06:18.007 "nvmf_set_max_subsystems", 00:06:18.007 "nvmf_stop_mdns_prr", 00:06:18.007 "nvmf_publish_mdns_prr", 00:06:18.007 "nvmf_subsystem_get_listeners", 00:06:18.007 "nvmf_subsystem_get_qpairs", 00:06:18.007 "nvmf_subsystem_get_controllers", 00:06:18.007 "nvmf_get_stats", 00:06:18.007 "nvmf_get_transports", 00:06:18.007 "nvmf_create_transport", 00:06:18.007 "nvmf_get_targets", 00:06:18.007 
"nvmf_delete_target", 00:06:18.007 "nvmf_create_target", 00:06:18.007 "nvmf_subsystem_allow_any_host", 00:06:18.007 "nvmf_subsystem_set_keys", 00:06:18.007 "nvmf_subsystem_remove_host", 00:06:18.007 "nvmf_subsystem_add_host", 00:06:18.007 "nvmf_ns_remove_host", 00:06:18.007 "nvmf_ns_add_host", 00:06:18.007 "nvmf_subsystem_remove_ns", 00:06:18.007 "nvmf_subsystem_set_ns_ana_group", 00:06:18.007 "nvmf_subsystem_add_ns", 00:06:18.007 "nvmf_subsystem_listener_set_ana_state", 00:06:18.007 "nvmf_discovery_get_referrals", 00:06:18.007 "nvmf_discovery_remove_referral", 00:06:18.007 "nvmf_discovery_add_referral", 00:06:18.007 "nvmf_subsystem_remove_listener", 00:06:18.007 "nvmf_subsystem_add_listener", 00:06:18.007 "nvmf_delete_subsystem", 00:06:18.007 "nvmf_create_subsystem", 00:06:18.007 "nvmf_get_subsystems", 00:06:18.007 "env_dpdk_get_mem_stats", 00:06:18.007 "nbd_get_disks", 00:06:18.007 "nbd_stop_disk", 00:06:18.007 "nbd_start_disk", 00:06:18.007 "ublk_recover_disk", 00:06:18.007 "ublk_get_disks", 00:06:18.007 "ublk_stop_disk", 00:06:18.007 "ublk_start_disk", 00:06:18.007 "ublk_destroy_target", 00:06:18.007 "ublk_create_target", 00:06:18.007 "virtio_blk_create_transport", 00:06:18.007 "virtio_blk_get_transports", 00:06:18.007 "vhost_controller_set_coalescing", 00:06:18.007 "vhost_get_controllers", 00:06:18.007 "vhost_delete_controller", 00:06:18.007 "vhost_create_blk_controller", 00:06:18.007 "vhost_scsi_controller_remove_target", 00:06:18.007 "vhost_scsi_controller_add_target", 00:06:18.007 "vhost_start_scsi_controller", 00:06:18.007 "vhost_create_scsi_controller", 00:06:18.007 "thread_set_cpumask", 00:06:18.007 "scheduler_set_options", 00:06:18.007 "framework_get_governor", 00:06:18.007 "framework_get_scheduler", 00:06:18.007 "framework_set_scheduler", 00:06:18.007 "framework_get_reactors", 00:06:18.007 "thread_get_io_channels", 00:06:18.007 "thread_get_pollers", 00:06:18.007 "thread_get_stats", 00:06:18.007 "framework_monitor_context_switch", 00:06:18.007 "spdk_kill_instance", 00:06:18.007 "log_enable_timestamps", 00:06:18.007 "log_get_flags", 00:06:18.007 "log_clear_flag", 00:06:18.007 "log_set_flag", 00:06:18.007 "log_get_level", 00:06:18.007 "log_set_level", 00:06:18.007 "log_get_print_level", 00:06:18.007 "log_set_print_level", 00:06:18.007 "framework_enable_cpumask_locks", 00:06:18.007 "framework_disable_cpumask_locks", 00:06:18.007 "framework_wait_init", 00:06:18.007 "framework_start_init", 00:06:18.007 "scsi_get_devices", 00:06:18.007 "bdev_get_histogram", 00:06:18.007 "bdev_enable_histogram", 00:06:18.007 "bdev_set_qos_limit", 00:06:18.007 "bdev_set_qd_sampling_period", 00:06:18.007 "bdev_get_bdevs", 00:06:18.007 "bdev_reset_iostat", 00:06:18.007 "bdev_get_iostat", 00:06:18.007 "bdev_examine", 00:06:18.007 "bdev_wait_for_examine", 00:06:18.007 "bdev_set_options", 00:06:18.007 "accel_get_stats", 00:06:18.007 "accel_set_options", 00:06:18.007 "accel_set_driver", 00:06:18.007 "accel_crypto_key_destroy", 00:06:18.007 "accel_crypto_keys_get", 00:06:18.007 "accel_crypto_key_create", 00:06:18.007 "accel_assign_opc", 00:06:18.007 "accel_get_module_info", 00:06:18.007 "accel_get_opc_assignments", 00:06:18.007 "vmd_rescan", 00:06:18.007 "vmd_remove_device", 00:06:18.007 "vmd_enable", 00:06:18.007 "sock_get_default_impl", 00:06:18.007 "sock_set_default_impl", 00:06:18.007 "sock_impl_set_options", 00:06:18.007 "sock_impl_get_options", 00:06:18.007 "iobuf_get_stats", 00:06:18.007 "iobuf_set_options", 00:06:18.007 "keyring_get_keys", 00:06:18.007 "framework_get_pci_devices", 00:06:18.007 
"framework_get_config", 00:06:18.007 "framework_get_subsystems", 00:06:18.007 "fsdev_set_opts", 00:06:18.007 "fsdev_get_opts", 00:06:18.007 "trace_get_info", 00:06:18.007 "trace_get_tpoint_group_mask", 00:06:18.007 "trace_disable_tpoint_group", 00:06:18.007 "trace_enable_tpoint_group", 00:06:18.007 "trace_clear_tpoint_mask", 00:06:18.007 "trace_set_tpoint_mask", 00:06:18.007 "notify_get_notifications", 00:06:18.007 "notify_get_types", 00:06:18.007 "spdk_get_version", 00:06:18.007 "rpc_get_methods" 00:06:18.007 ] 00:06:18.007 10:18:17 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:18.007 10:18:17 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.007 10:18:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.007 10:18:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:18.007 10:18:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58704 00:06:18.007 10:18:17 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58704 ']' 00:06:18.007 10:18:17 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58704 00:06:18.007 10:18:17 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:18.007 10:18:17 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.007 10:18:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58704 00:06:18.007 10:18:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.007 10:18:17 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.007 10:18:17 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58704' 00:06:18.007 killing process with pid 58704 00:06:18.007 10:18:17 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58704 00:06:18.007 10:18:17 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58704 00:06:20.545 00:06:20.545 real 0m4.372s 00:06:20.545 user 0m7.650s 00:06:20.545 sys 0m0.719s 00:06:20.545 10:18:19 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.545 10:18:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.545 ************************************ 00:06:20.545 END TEST spdkcli_tcp 00:06:20.545 ************************************ 00:06:20.545 10:18:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:20.545 10:18:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.545 10:18:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.545 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:06:20.806 ************************************ 00:06:20.806 START TEST dpdk_mem_utility 00:06:20.806 ************************************ 00:06:20.806 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:20.806 * Looking for test storage... 
00:06:20.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:20.806 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.806 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.806 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.806 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:20.806 10:18:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.807 10:18:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:20.807 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.807 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.807 --rc genhtml_branch_coverage=1 00:06:20.807 --rc genhtml_function_coverage=1 00:06:20.807 --rc genhtml_legend=1 00:06:20.807 --rc geninfo_all_blocks=1 00:06:20.807 --rc geninfo_unexecuted_blocks=1 00:06:20.807 00:06:20.807 ' 00:06:20.807 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.807 --rc 
genhtml_branch_coverage=1 00:06:20.807 --rc genhtml_function_coverage=1 00:06:20.807 --rc genhtml_legend=1 00:06:20.807 --rc geninfo_all_blocks=1 00:06:20.807 --rc geninfo_unexecuted_blocks=1 00:06:20.807 00:06:20.807 ' 00:06:20.807 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.807 --rc genhtml_branch_coverage=1 00:06:20.807 --rc genhtml_function_coverage=1 00:06:20.807 --rc genhtml_legend=1 00:06:20.807 --rc geninfo_all_blocks=1 00:06:20.807 --rc geninfo_unexecuted_blocks=1 00:06:20.807 00:06:20.807 ' 00:06:20.807 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.807 --rc genhtml_branch_coverage=1 00:06:20.807 --rc genhtml_function_coverage=1 00:06:20.807 --rc genhtml_legend=1 00:06:20.807 --rc geninfo_all_blocks=1 00:06:20.807 --rc geninfo_unexecuted_blocks=1 00:06:20.807 00:06:20.807 ' 00:06:20.807 10:18:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:20.807 10:18:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58831 00:06:20.807 10:18:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.807 10:18:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58831 00:06:20.807 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58831 ']' 00:06:20.807 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.807 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.807 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.807 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.807 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:21.067 [2024-12-07 10:18:20.260131] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:21.067 [2024-12-07 10:18:20.260459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58831 ] 00:06:21.326 [2024-12-07 10:18:20.448163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.326 [2024-12-07 10:18:20.588541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.705 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.705 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:22.705 10:18:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:22.705 10:18:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:22.705 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.705 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:22.705 { 00:06:22.705 "filename": "/tmp/spdk_mem_dump.txt" 00:06:22.705 } 00:06:22.705 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.705 10:18:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:22.705 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:22.705 1 heaps totaling size 824.000000 MiB 00:06:22.705 size: 824.000000 MiB heap id: 0 00:06:22.705 end heaps---------- 00:06:22.705 9 mempools totaling size 603.782043 MiB 00:06:22.705 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:22.705 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:22.705 size: 100.555481 MiB name: bdev_io_58831 00:06:22.705 size: 50.003479 MiB name: msgpool_58831 00:06:22.705 size: 36.509338 MiB name: fsdev_io_58831 00:06:22.705 size: 21.763794 MiB name: PDU_Pool 00:06:22.705 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:22.705 size: 4.133484 MiB name: evtpool_58831 00:06:22.705 size: 0.026123 MiB name: Session_Pool 00:06:22.705 end mempools------- 00:06:22.705 6 memzones totaling size 4.142822 MiB 00:06:22.705 size: 1.000366 MiB name: RG_ring_0_58831 00:06:22.705 size: 1.000366 MiB name: RG_ring_1_58831 00:06:22.705 size: 1.000366 MiB name: RG_ring_4_58831 00:06:22.705 size: 1.000366 MiB name: RG_ring_5_58831 00:06:22.705 size: 0.125366 MiB name: RG_ring_2_58831 00:06:22.705 size: 0.015991 MiB name: RG_ring_3_58831 00:06:22.705 end memzones------- 00:06:22.705 10:18:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:22.705 heap id: 0 total size: 824.000000 MiB number of busy elements: 326 number of free elements: 18 00:06:22.705 list of free elements. 
size: 16.778687 MiB 00:06:22.705 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:22.705 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:22.705 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:22.705 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:22.705 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:22.705 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:22.705 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:22.705 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:22.705 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:22.705 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:22.705 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:22.705 element at address: 0x20001b400000 with size: 0.559998 MiB 00:06:22.705 element at address: 0x200000c00000 with size: 0.489197 MiB 00:06:22.705 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:22.705 element at address: 0x200019e00000 with size: 0.485413 MiB 00:06:22.705 element at address: 0x200012c00000 with size: 0.433472 MiB 00:06:22.705 element at address: 0x200028800000 with size: 0.390442 MiB 00:06:22.705 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:22.706 list of standard malloc elements. size: 199.290405 MiB 00:06:22.706 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:22.706 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:22.706 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:22.706 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:22.706 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:06:22.706 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:22.706 element at address: 0x200019deff40 with size: 0.062683 MiB 00:06:22.706 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:22.706 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:22.706 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:06:22.706 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:22.706 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:06:22.706 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:22.706 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:06:22.706 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200019affc40 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:06:22.706 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4912c0 with size: 0.000244 MiB 
00:06:22.707 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:06:22.707 element at 
address: 0x20001b4944c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:06:22.707 element at address: 0x200028863f40 with size: 0.000244 MiB 00:06:22.707 element at address: 0x200028864040 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886af80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886b080 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886b180 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886b280 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886b380 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886b480 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886b580 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886b680 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886b780 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886b880 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886b980 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886be80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886c080 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886c180 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886c280 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886c380 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886c480 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886c580 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886c680 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886c780 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886c880 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886c980 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886cd80 
with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886d080 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886d180 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886d280 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886d380 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886d480 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886d580 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886d680 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886d780 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886d880 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886d980 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886da80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886db80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886de80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886df80 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886e080 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886e180 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886e280 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886e380 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886e480 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886e580 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886e680 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886e780 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886e880 with size: 0.000244 MiB 00:06:22.707 element at address: 0x20002886e980 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886f080 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886f180 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886f280 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886f380 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886f480 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886f580 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886f680 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886f780 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886f880 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886f980 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:06:22.708 element at address: 0x20002886fe80 with size: 0.000244 MiB 
00:06:22.708 list of memzone associated elements. size: 607.930908 MiB 00:06:22.708 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:06:22.708 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:22.708 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:06:22.708 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:22.708 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:06:22.708 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58831_0 00:06:22.708 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:22.708 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58831_0 00:06:22.708 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:22.708 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58831_0 00:06:22.708 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:06:22.708 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:22.708 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:06:22.708 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:22.708 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:22.708 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58831_0 00:06:22.708 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:22.708 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58831 00:06:22.708 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:22.708 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58831 00:06:22.708 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:06:22.708 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:22.708 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:06:22.708 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:22.708 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:22.708 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:22.708 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:06:22.708 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:22.708 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:22.708 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58831 00:06:22.708 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:22.708 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58831 00:06:22.708 element at address: 0x200019affd40 with size: 1.000549 MiB 00:06:22.708 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58831 00:06:22.708 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:06:22.708 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58831 00:06:22.708 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:22.708 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58831 00:06:22.708 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:22.708 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58831 00:06:22.708 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:06:22.708 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:22.708 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:06:22.708 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:22.708 element at address: 0x200019e7c440 with size: 0.250549 MiB 
00:06:22.708 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:22.708 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:22.708 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58831 00:06:22.708 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:22.708 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58831 00:06:22.708 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:06:22.708 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:22.708 element at address: 0x200028864140 with size: 0.023804 MiB 00:06:22.708 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:22.708 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:22.708 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58831 00:06:22.708 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:06:22.708 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:22.708 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:22.708 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58831 00:06:22.708 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:22.708 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58831 00:06:22.708 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:22.708 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58831 00:06:22.708 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:06:22.708 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:22.708 10:18:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:22.708 10:18:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58831 00:06:22.708 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58831 ']' 00:06:22.708 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58831 00:06:22.708 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:22.708 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.708 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58831 00:06:22.708 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.708 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.708 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58831' 00:06:22.708 killing process with pid 58831 00:06:22.708 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58831 00:06:22.708 10:18:21 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58831 00:06:25.240 00:06:25.240 real 0m4.406s 00:06:25.240 user 0m4.031s 00:06:25.240 sys 0m0.839s 00:06:25.240 ************************************ 00:06:25.240 END TEST dpdk_mem_utility 00:06:25.240 ************************************ 00:06:25.240 10:18:24 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.240 10:18:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.240 10:18:24 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:25.240 10:18:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.240 10:18:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.240 
10:18:24 -- common/autotest_common.sh@10 -- # set +x 00:06:25.240 ************************************ 00:06:25.240 START TEST event 00:06:25.240 ************************************ 00:06:25.240 10:18:24 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:25.240 * Looking for test storage... 00:06:25.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:25.240 10:18:24 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:25.240 10:18:24 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.240 10:18:24 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.499 10:18:24 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.499 10:18:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.499 10:18:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.499 10:18:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.499 10:18:24 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.499 10:18:24 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.499 10:18:24 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.499 10:18:24 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.499 10:18:24 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.499 10:18:24 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.499 10:18:24 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.499 10:18:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.499 10:18:24 event -- scripts/common.sh@344 -- # case "$op" in 00:06:25.499 10:18:24 event -- scripts/common.sh@345 -- # : 1 00:06:25.499 10:18:24 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.499 10:18:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.499 10:18:24 event -- scripts/common.sh@365 -- # decimal 1 00:06:25.499 10:18:24 event -- scripts/common.sh@353 -- # local d=1 00:06:25.499 10:18:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.499 10:18:24 event -- scripts/common.sh@355 -- # echo 1 00:06:25.499 10:18:24 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.499 10:18:24 event -- scripts/common.sh@366 -- # decimal 2 00:06:25.499 10:18:24 event -- scripts/common.sh@353 -- # local d=2 00:06:25.499 10:18:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.499 10:18:24 event -- scripts/common.sh@355 -- # echo 2 00:06:25.499 10:18:24 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.499 10:18:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.499 10:18:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.499 10:18:24 event -- scripts/common.sh@368 -- # return 0 00:06:25.499 10:18:24 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.499 10:18:24 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.499 --rc genhtml_branch_coverage=1 00:06:25.499 --rc genhtml_function_coverage=1 00:06:25.499 --rc genhtml_legend=1 00:06:25.499 --rc geninfo_all_blocks=1 00:06:25.499 --rc geninfo_unexecuted_blocks=1 00:06:25.499 00:06:25.499 ' 00:06:25.499 10:18:24 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.499 --rc genhtml_branch_coverage=1 00:06:25.499 --rc genhtml_function_coverage=1 00:06:25.499 --rc genhtml_legend=1 00:06:25.499 --rc geninfo_all_blocks=1 00:06:25.499 --rc geninfo_unexecuted_blocks=1 00:06:25.499 00:06:25.499 ' 00:06:25.499 10:18:24 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.499 --rc genhtml_branch_coverage=1 00:06:25.499 --rc genhtml_function_coverage=1 00:06:25.499 --rc genhtml_legend=1 00:06:25.499 --rc geninfo_all_blocks=1 00:06:25.499 --rc geninfo_unexecuted_blocks=1 00:06:25.499 00:06:25.499 ' 00:06:25.499 10:18:24 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.499 --rc genhtml_branch_coverage=1 00:06:25.500 --rc genhtml_function_coverage=1 00:06:25.500 --rc genhtml_legend=1 00:06:25.500 --rc geninfo_all_blocks=1 00:06:25.500 --rc geninfo_unexecuted_blocks=1 00:06:25.500 00:06:25.500 ' 00:06:25.500 10:18:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:25.500 10:18:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:25.500 10:18:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:25.500 10:18:24 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:25.500 10:18:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.500 10:18:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.500 ************************************ 00:06:25.500 START TEST event_perf 00:06:25.500 ************************************ 00:06:25.500 10:18:24 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:25.500 Running I/O for 1 seconds...[2024-12-07 
10:18:24.679102] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:25.500 [2024-12-07 10:18:24.679333] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58945 ] 00:06:25.759 [2024-12-07 10:18:24.866283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.759 [2024-12-07 10:18:25.002797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.759 [2024-12-07 10:18:25.003060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.759 [2024-12-07 10:18:25.003202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.759 [2024-12-07 10:18:25.003213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.135 Running I/O for 1 seconds... 00:06:27.135 lcore 0: 107464 00:06:27.135 lcore 1: 107466 00:06:27.135 lcore 2: 107462 00:06:27.135 lcore 3: 107461 00:06:27.135 done. 00:06:27.135 00:06:27.135 real 0m1.627s 00:06:27.135 user 0m4.355s 00:06:27.135 sys 0m0.148s 00:06:27.135 ************************************ 00:06:27.135 END TEST event_perf 00:06:27.135 ************************************ 00:06:27.135 10:18:26 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.135 10:18:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.135 10:18:26 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:27.135 10:18:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:27.135 10:18:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.135 10:18:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.135 ************************************ 00:06:27.135 START TEST event_reactor 00:06:27.135 ************************************ 00:06:27.135 10:18:26 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:27.135 [2024-12-07 10:18:26.393132] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:27.135 [2024-12-07 10:18:26.393282] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58979 ] 00:06:27.393 [2024-12-07 10:18:26.579850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.393 [2024-12-07 10:18:26.720116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.771 test_start 00:06:28.771 oneshot 00:06:28.771 tick 100 00:06:28.771 tick 100 00:06:28.771 tick 250 00:06:28.771 tick 100 00:06:28.771 tick 100 00:06:28.771 tick 100 00:06:28.771 tick 250 00:06:28.771 tick 500 00:06:28.771 tick 100 00:06:28.771 tick 100 00:06:28.771 tick 250 00:06:28.772 tick 100 00:06:28.772 tick 100 00:06:28.772 test_end 00:06:28.772 00:06:28.772 real 0m1.607s 00:06:28.772 user 0m1.368s 00:06:28.772 sys 0m0.129s 00:06:28.772 ************************************ 00:06:28.772 END TEST event_reactor 00:06:28.772 ************************************ 00:06:28.772 10:18:27 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.772 10:18:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:28.772 10:18:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:28.772 10:18:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:28.772 10:18:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.772 10:18:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.772 ************************************ 00:06:28.772 START TEST event_reactor_perf 00:06:28.772 ************************************ 00:06:28.772 10:18:28 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:28.772 [2024-12-07 10:18:28.077139] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:28.772 [2024-12-07 10:18:28.077448] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59021 ] 00:06:29.031 [2024-12-07 10:18:28.262218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.290 [2024-12-07 10:18:28.395101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.670 test_start 00:06:30.670 test_end 00:06:30.670 Performance: 404809 events per second 00:06:30.670 00:06:30.670 real 0m1.605s 00:06:30.670 user 0m1.379s 00:06:30.670 sys 0m0.116s 00:06:30.670 ************************************ 00:06:30.670 END TEST event_reactor_perf 00:06:30.670 ************************************ 00:06:30.670 10:18:29 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.670 10:18:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.670 10:18:29 event -- event/event.sh@49 -- # uname -s 00:06:30.670 10:18:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:30.670 10:18:29 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:30.670 10:18:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.670 10:18:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.670 10:18:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.670 ************************************ 00:06:30.670 START TEST event_scheduler 00:06:30.670 ************************************ 00:06:30.670 10:18:29 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:30.670 * Looking for test storage... 
00:06:30.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:30.670 10:18:29 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.670 10:18:29 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.670 10:18:29 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.670 10:18:29 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.670 10:18:29 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:30.670 10:18:29 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.670 10:18:29 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.670 --rc genhtml_branch_coverage=1 00:06:30.670 --rc genhtml_function_coverage=1 00:06:30.670 --rc genhtml_legend=1 00:06:30.670 --rc geninfo_all_blocks=1 00:06:30.670 --rc geninfo_unexecuted_blocks=1 00:06:30.670 00:06:30.670 ' 00:06:30.670 10:18:29 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.670 --rc genhtml_branch_coverage=1 00:06:30.670 --rc genhtml_function_coverage=1 00:06:30.670 --rc genhtml_legend=1 00:06:30.670 --rc geninfo_all_blocks=1 00:06:30.670 --rc geninfo_unexecuted_blocks=1 00:06:30.670 00:06:30.670 ' 00:06:30.670 10:18:29 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.670 --rc genhtml_branch_coverage=1 00:06:30.670 --rc genhtml_function_coverage=1 00:06:30.670 --rc genhtml_legend=1 00:06:30.670 --rc geninfo_all_blocks=1 00:06:30.670 --rc geninfo_unexecuted_blocks=1 00:06:30.670 00:06:30.670 ' 00:06:30.670 10:18:29 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.670 --rc genhtml_branch_coverage=1 00:06:30.670 --rc genhtml_function_coverage=1 00:06:30.670 --rc genhtml_legend=1 00:06:30.670 --rc geninfo_all_blocks=1 00:06:30.670 --rc geninfo_unexecuted_blocks=1 00:06:30.670 00:06:30.670 ' 00:06:30.670 10:18:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:30.670 10:18:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59097 00:06:30.670 10:18:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:30.670 10:18:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.670 10:18:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59097 00:06:30.670 10:18:29 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59097 ']' 00:06:30.671 10:18:29 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.671 10:18:29 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.671 10:18:29 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.671 10:18:29 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.671 10:18:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.930 [2024-12-07 10:18:30.062728] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:30.930 [2024-12-07 10:18:30.063007] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59097 ] 00:06:30.930 [2024-12-07 10:18:30.242732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.190 [2024-12-07 10:18:30.357719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.190 [2024-12-07 10:18:30.358154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.190 [2024-12-07 10:18:30.357942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.190 [2024-12-07 10:18:30.358147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.760 10:18:30 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.760 10:18:30 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:31.760 10:18:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:31.760 10:18:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.760 10:18:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.760 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:31.760 POWER: Cannot set governor of lcore 0 to userspace 00:06:31.760 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:31.760 POWER: Cannot set governor of lcore 0 to performance 00:06:31.760 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:31.760 POWER: Cannot set governor of lcore 0 to userspace 00:06:31.760 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:31.760 POWER: Cannot set governor of lcore 0 to userspace 00:06:31.760 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:31.760 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:31.760 POWER: Unable to set Power Management Environment for lcore 0 00:06:31.760 [2024-12-07 10:18:30.900641] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:31.760 [2024-12-07 10:18:30.900669] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:31.760 [2024-12-07 10:18:30.900681] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:31.760 [2024-12-07 10:18:30.900702] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:31.760 [2024-12-07 10:18:30.900713] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:31.760 [2024-12-07 10:18:30.900725] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:31.760 10:18:30 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.760 10:18:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:31.760 10:18:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.760 10:18:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:32.020 [2024-12-07 10:18:31.210618] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:32.020 10:18:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.020 10:18:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:32.020 10:18:31 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.020 10:18:31 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.020 10:18:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:32.020 ************************************ 00:06:32.020 START TEST scheduler_create_thread 00:06:32.020 ************************************ 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.020 2 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.020 3 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.020 4 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.020 5 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.020 6 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.020 7 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.020 8 00:06:32.020 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.021 10:18:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:32.021 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.021 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.021 9 00:06:32.021 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.021 10:18:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:32.021 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.021 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.021 10 00:06:32.021 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.021 10:18:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:32.021 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.021 10:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.400 10:18:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.400 10:18:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:33.400 10:18:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:33.400 10:18:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.400 10:18:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.337 10:18:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.337 10:18:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:34.337 10:18:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.337 10:18:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.275 10:18:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.275 10:18:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:35.275 10:18:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:35.275 10:18:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.275 10:18:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.843 ************************************ 00:06:35.843 END TEST scheduler_create_thread 00:06:35.843 ************************************ 00:06:35.843 10:18:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.843 00:06:35.843 real 0m3.883s 00:06:35.843 user 0m0.026s 00:06:35.843 sys 0m0.007s 00:06:35.843 10:18:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.843 10:18:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.843 10:18:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:35.843 10:18:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59097 00:06:35.843 10:18:35 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59097 ']' 00:06:35.843 10:18:35 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59097 00:06:35.843 10:18:35 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:35.843 10:18:35 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.843 10:18:35 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59097 00:06:36.103 killing process with pid 59097 00:06:36.103 10:18:35 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:36.103 10:18:35 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:36.103 10:18:35 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59097' 00:06:36.103 10:18:35 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59097 00:06:36.103 10:18:35 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59097 00:06:36.361 [2024-12-07 10:18:35.488307] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:37.298 00:06:37.298 real 0m6.902s 00:06:37.298 user 0m14.177s 00:06:37.298 sys 0m0.574s 00:06:37.298 10:18:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.298 10:18:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.298 ************************************ 00:06:37.298 END TEST event_scheduler 00:06:37.298 ************************************ 00:06:37.557 10:18:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:37.557 10:18:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:37.557 10:18:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.557 10:18:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.557 10:18:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.557 ************************************ 00:06:37.557 START TEST app_repeat 00:06:37.557 ************************************ 00:06:37.557 10:18:36 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59214 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.557 Process app_repeat pid: 59214 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59214' 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.557 spdk_app_start Round 0 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:37.557 10:18:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59214 /var/tmp/spdk-nbd.sock 00:06:37.557 10:18:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59214 ']' 00:06:37.557 10:18:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.557 10:18:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.557 10:18:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.558 10:18:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.558 10:18:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.558 [2024-12-07 10:18:36.780056] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:37.558 [2024-12-07 10:18:36.780174] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59214 ] 00:06:37.817 [2024-12-07 10:18:36.979423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.817 [2024-12-07 10:18:37.118166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.817 [2024-12-07 10:18:37.118205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.385 10:18:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.385 10:18:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:38.386 10:18:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.645 Malloc0 00:06:38.645 10:18:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.904 Malloc1 00:06:38.904 10:18:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.904 10:18:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.904 10:18:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.905 10:18:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.905 10:18:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.905 10:18:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.905 10:18:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.905 10:18:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.905 10:18:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.905 10:18:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.905 10:18:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.905 10:18:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.905 10:18:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.905 10:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.905 10:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.905 10:18:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.164 /dev/nbd0 00:06:39.164 10:18:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.164 10:18:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.164 10:18:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:39.164 10:18:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:39.164 10:18:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.164 10:18:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.164 10:18:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:39.164 10:18:38 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:39.164 10:18:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.164 10:18:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.164 10:18:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.164 1+0 records in 00:06:39.164 1+0 records out 00:06:39.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249058 s, 16.4 MB/s 00:06:39.164 10:18:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.164 10:18:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:39.164 10:18:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.164 10:18:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.164 10:18:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:39.164 10:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.164 10:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.164 10:18:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.424 /dev/nbd1 00:06:39.424 10:18:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.424 10:18:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.424 1+0 records in 00:06:39.424 1+0 records out 00:06:39.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381732 s, 10.7 MB/s 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.424 10:18:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:39.424 10:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.424 10:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.424 10:18:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.424 10:18:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:06:39.424 10:18:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.735 { 00:06:39.735 "nbd_device": "/dev/nbd0", 00:06:39.735 "bdev_name": "Malloc0" 00:06:39.735 }, 00:06:39.735 { 00:06:39.735 "nbd_device": "/dev/nbd1", 00:06:39.735 "bdev_name": "Malloc1" 00:06:39.735 } 00:06:39.735 ]' 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.735 { 00:06:39.735 "nbd_device": "/dev/nbd0", 00:06:39.735 "bdev_name": "Malloc0" 00:06:39.735 }, 00:06:39.735 { 00:06:39.735 "nbd_device": "/dev/nbd1", 00:06:39.735 "bdev_name": "Malloc1" 00:06:39.735 } 00:06:39.735 ]' 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.735 /dev/nbd1' 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.735 /dev/nbd1' 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.735 10:18:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.735 256+0 records in 00:06:39.735 256+0 records out 00:06:39.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012444 s, 84.3 MB/s 00:06:39.736 10:18:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.736 10:18:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.736 256+0 records in 00:06:39.736 256+0 records out 00:06:39.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283924 s, 36.9 MB/s 00:06:39.736 10:18:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.736 10:18:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.736 256+0 records in 00:06:39.736 256+0 records out 00:06:39.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0341053 s, 30.7 MB/s 00:06:39.736 10:18:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.736 10:18:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.736 10:18:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.736 10:18:39 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.736 10:18:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.736 10:18:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.736 10:18:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.736 10:18:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.736 10:18:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.736 10:18:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.736 10:18:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.736 10:18:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.022 10:18:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.281 10:18:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.281 10:18:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.281 10:18:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.281 10:18:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.281 10:18:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.281 10:18:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.281 10:18:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.281 10:18:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.281 10:18:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.281 10:18:39 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.281 10:18:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.540 10:18:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.540 10:18:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.540 10:18:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.540 10:18:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.540 10:18:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.540 10:18:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.541 10:18:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:40.541 10:18:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.541 10:18:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.541 10:18:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.541 10:18:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.541 10:18:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.541 10:18:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.108 10:18:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:42.044 [2024-12-07 10:18:41.367649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.301 [2024-12-07 10:18:41.487205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.301 [2024-12-07 10:18:41.487215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.558 [2024-12-07 10:18:41.707692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.558 [2024-12-07 10:18:41.707794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.932 spdk_app_start Round 1 00:06:43.932 10:18:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.932 10:18:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:43.932 10:18:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59214 /var/tmp/spdk-nbd.sock 00:06:43.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.932 10:18:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59214 ']' 00:06:43.932 10:18:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.932 10:18:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.932 10:18:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:43.932 10:18:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.932 10:18:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.191 10:18:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.191 10:18:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:44.191 10:18:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.449 Malloc0 00:06:44.449 10:18:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.708 Malloc1 00:06:44.708 10:18:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.708 10:18:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.966 /dev/nbd0 00:06:44.966 10:18:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.966 10:18:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.966 1+0 records in 00:06:44.966 1+0 records out 
00:06:44.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427266 s, 9.6 MB/s 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.966 10:18:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:44.966 10:18:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.966 10:18:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.966 10:18:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.225 /dev/nbd1 00:06:45.225 10:18:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.225 10:18:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.225 1+0 records in 00:06:45.225 1+0 records out 00:06:45.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027183 s, 15.1 MB/s 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.225 10:18:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:45.225 10:18:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.225 10:18:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.225 10:18:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.225 10:18:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.225 10:18:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.484 { 00:06:45.484 "nbd_device": "/dev/nbd0", 00:06:45.484 "bdev_name": "Malloc0" 00:06:45.484 }, 00:06:45.484 { 00:06:45.484 "nbd_device": "/dev/nbd1", 00:06:45.484 "bdev_name": "Malloc1" 00:06:45.484 } 
00:06:45.484 ]' 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.484 { 00:06:45.484 "nbd_device": "/dev/nbd0", 00:06:45.484 "bdev_name": "Malloc0" 00:06:45.484 }, 00:06:45.484 { 00:06:45.484 "nbd_device": "/dev/nbd1", 00:06:45.484 "bdev_name": "Malloc1" 00:06:45.484 } 00:06:45.484 ]' 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.484 /dev/nbd1' 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.484 /dev/nbd1' 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.484 256+0 records in 00:06:45.484 256+0 records out 00:06:45.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00666042 s, 157 MB/s 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.484 256+0 records in 00:06:45.484 256+0 records out 00:06:45.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321709 s, 32.6 MB/s 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.484 256+0 records in 00:06:45.484 256+0 records out 00:06:45.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326556 s, 32.1 MB/s 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.484 10:18:44 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.484 10:18:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.743 10:18:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.743 10:18:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.743 10:18:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.743 10:18:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.743 10:18:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:45.743 10:18:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.743 10:18:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.743 10:18:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.743 10:18:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.743 10:18:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.743 10:18:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.743 10:18:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.743 10:18:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.743 10:18:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.743 10:18:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.743 10:18:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.743 10:18:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.001 10:18:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.001 10:18:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.001 10:18:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.001 10:18:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.001 10:18:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.001 10:18:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.001 10:18:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.001 10:18:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.001 10:18:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.001 10:18:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.001 10:18:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.259 10:18:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.259 10:18:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.259 10:18:45 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:46.259 10:18:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.259 10:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.259 10:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.259 10:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:46.259 10:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.259 10:18:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.259 10:18:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.259 10:18:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.259 10:18:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.259 10:18:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.826 10:18:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:48.205 [2024-12-07 10:18:47.129145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.205 [2024-12-07 10:18:47.247905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.205 [2024-12-07 10:18:47.247916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.205 [2024-12-07 10:18:47.469095] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:48.205 [2024-12-07 10:18:47.469191] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:49.581 spdk_app_start Round 2 00:06:49.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:49.581 10:18:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:49.581 10:18:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:49.581 10:18:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59214 /var/tmp/spdk-nbd.sock 00:06:49.581 10:18:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59214 ']' 00:06:49.581 10:18:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.581 10:18:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.581 10:18:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:49.581 10:18:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.581 10:18:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.840 10:18:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.840 10:18:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:49.840 10:18:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.098 Malloc0 00:06:50.098 10:18:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.358 Malloc1 00:06:50.358 10:18:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.358 10:18:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:50.617 /dev/nbd0 00:06:50.617 10:18:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.617 10:18:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.617 1+0 records in 00:06:50.617 1+0 records out 
00:06:50.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390688 s, 10.5 MB/s 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.617 10:18:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:50.617 10:18:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.617 10:18:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.617 10:18:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:50.876 /dev/nbd1 00:06:50.876 10:18:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:50.876 10:18:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:50.876 10:18:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:50.877 10:18:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:50.877 10:18:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.877 10:18:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.877 10:18:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:50.877 10:18:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:50.877 10:18:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.877 10:18:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.877 10:18:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.877 1+0 records in 00:06:50.877 1+0 records out 00:06:50.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443976 s, 9.2 MB/s 00:06:50.877 10:18:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.877 10:18:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:50.877 10:18:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.877 10:18:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.877 10:18:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:50.877 10:18:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.877 10:18:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.877 10:18:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.877 10:18:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.877 10:18:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.135 10:18:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.135 { 00:06:51.135 "nbd_device": "/dev/nbd0", 00:06:51.135 "bdev_name": "Malloc0" 00:06:51.135 }, 00:06:51.135 { 00:06:51.135 "nbd_device": "/dev/nbd1", 00:06:51.135 "bdev_name": "Malloc1" 00:06:51.135 } 
00:06:51.135 ]' 00:06:51.135 10:18:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.135 { 00:06:51.135 "nbd_device": "/dev/nbd0", 00:06:51.135 "bdev_name": "Malloc0" 00:06:51.135 }, 00:06:51.135 { 00:06:51.135 "nbd_device": "/dev/nbd1", 00:06:51.136 "bdev_name": "Malloc1" 00:06:51.136 } 00:06:51.136 ]' 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.136 /dev/nbd1' 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.136 /dev/nbd1' 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:51.136 256+0 records in 00:06:51.136 256+0 records out 00:06:51.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124618 s, 84.1 MB/s 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.136 10:18:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.394 256+0 records in 00:06:51.394 256+0 records out 00:06:51.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321614 s, 32.6 MB/s 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.394 256+0 records in 00:06:51.394 256+0 records out 00:06:51.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0427574 s, 24.5 MB/s 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.394 10:18:50 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:51.394 10:18:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.395 10:18:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.395 10:18:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.395 10:18:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.395 10:18:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.395 10:18:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.653 10:18:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.654 10:18:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.654 10:18:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.654 10:18:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.654 10:18:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.654 10:18:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.654 10:18:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.654 10:18:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.654 10:18:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.654 10:18:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.654 10:18:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.912 10:18:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.912 10:18:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.912 10:18:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.912 10:18:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.912 10:18:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.912 10:18:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.912 10:18:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.912 10:18:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.912 10:18:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.912 10:18:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.912 10:18:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.912 10:18:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.912 10:18:51 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:52.172 10:18:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.172 10:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.172 10:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.172 10:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:52.172 10:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.172 10:18:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.172 10:18:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.172 10:18:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.172 10:18:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.172 10:18:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.431 10:18:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.809 [2024-12-07 10:18:52.894368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.809 [2024-12-07 10:18:53.013421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.809 [2024-12-07 10:18:53.013433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.068 [2024-12-07 10:18:53.232632] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.068 [2024-12-07 10:18:53.232732] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:55.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:55.446 10:18:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59214 /var/tmp/spdk-nbd.sock 00:06:55.446 10:18:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59214 ']' 00:06:55.446 10:18:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.446 10:18:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.446 10:18:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:55.446 10:18:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.446 10:18:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.705 10:18:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.705 10:18:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:55.705 10:18:54 event.app_repeat -- event/event.sh@39 -- # killprocess 59214 00:06:55.705 10:18:54 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59214 ']' 00:06:55.705 10:18:54 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59214 00:06:55.705 10:18:54 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:55.705 10:18:54 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.705 10:18:54 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59214 00:06:55.705 killing process with pid 59214 00:06:55.705 10:18:54 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.705 10:18:54 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.705 10:18:54 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59214' 00:06:55.705 10:18:54 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59214 00:06:55.705 10:18:54 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59214 00:06:57.082 spdk_app_start is called in Round 0. 00:06:57.082 Shutdown signal received, stop current app iteration 00:06:57.082 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:06:57.082 spdk_app_start is called in Round 1. 00:06:57.082 Shutdown signal received, stop current app iteration 00:06:57.082 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:06:57.082 spdk_app_start is called in Round 2. 00:06:57.082 Shutdown signal received, stop current app iteration 00:06:57.082 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:06:57.082 spdk_app_start is called in Round 3. 00:06:57.082 Shutdown signal received, stop current app iteration 00:06:57.082 ************************************ 00:06:57.082 END TEST app_repeat 00:06:57.082 ************************************ 00:06:57.082 10:18:56 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:57.082 10:18:56 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:57.082 00:06:57.082 real 0m19.350s 00:06:57.082 user 0m40.396s 00:06:57.082 sys 0m3.434s 00:06:57.082 10:18:56 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.082 10:18:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.082 10:18:56 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:57.082 10:18:56 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:57.082 10:18:56 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.082 10:18:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.082 10:18:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.082 ************************************ 00:06:57.082 START TEST cpu_locks 00:06:57.082 ************************************ 00:06:57.082 10:18:56 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:57.082 * Looking for test storage... 
00:06:57.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:57.082 10:18:56 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:57.082 10:18:56 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:57.082 10:18:56 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:57.082 10:18:56 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.082 10:18:56 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:57.082 10:18:56 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.082 10:18:56 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:57.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.082 --rc genhtml_branch_coverage=1 00:06:57.082 --rc genhtml_function_coverage=1 00:06:57.082 --rc genhtml_legend=1 00:06:57.082 --rc geninfo_all_blocks=1 00:06:57.082 --rc geninfo_unexecuted_blocks=1 00:06:57.082 00:06:57.082 ' 00:06:57.082 10:18:56 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:57.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.082 --rc genhtml_branch_coverage=1 00:06:57.082 --rc genhtml_function_coverage=1 
00:06:57.082 --rc genhtml_legend=1 00:06:57.082 --rc geninfo_all_blocks=1 00:06:57.082 --rc geninfo_unexecuted_blocks=1 00:06:57.082 00:06:57.082 ' 00:06:57.082 10:18:56 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:57.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.082 --rc genhtml_branch_coverage=1 00:06:57.082 --rc genhtml_function_coverage=1 00:06:57.082 --rc genhtml_legend=1 00:06:57.082 --rc geninfo_all_blocks=1 00:06:57.082 --rc geninfo_unexecuted_blocks=1 00:06:57.082 00:06:57.082 ' 00:06:57.082 10:18:56 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:57.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.082 --rc genhtml_branch_coverage=1 00:06:57.082 --rc genhtml_function_coverage=1 00:06:57.082 --rc genhtml_legend=1 00:06:57.082 --rc geninfo_all_blocks=1 00:06:57.082 --rc geninfo_unexecuted_blocks=1 00:06:57.082 00:06:57.082 ' 00:06:57.082 10:18:56 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:57.082 10:18:56 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:57.082 10:18:56 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:57.082 10:18:56 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:57.083 10:18:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.083 10:18:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.083 10:18:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.083 ************************************ 00:06:57.083 START TEST default_locks 00:06:57.083 ************************************ 00:06:57.083 10:18:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:57.083 10:18:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59661 00:06:57.083 10:18:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.083 10:18:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59661 00:06:57.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.083 10:18:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59661 ']' 00:06:57.083 10:18:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.083 10:18:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.083 10:18:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.083 10:18:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.083 10:18:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.342 [2024-12-07 10:18:56.510925] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:57.342 [2024-12-07 10:18:56.511289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59661 ] 00:06:57.602 [2024-12-07 10:18:56.696030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.602 [2024-12-07 10:18:56.828401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.538 10:18:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.538 10:18:57 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:58.538 10:18:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59661 00:06:58.538 10:18:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59661 00:06:58.538 10:18:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.474 10:18:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59661 00:06:59.474 10:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59661 ']' 00:06:59.474 10:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59661 00:06:59.474 10:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:59.474 10:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.474 10:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59661 00:06:59.474 killing process with pid 59661 00:06:59.474 10:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.474 10:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.474 10:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59661' 00:06:59.474 10:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59661 00:06:59.474 10:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59661 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59661 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59661 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59661 00:07:02.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
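The default_locks flow above leans on two small helpers; from the xtrace they can be summarised as below. locks_exist follows cpu_locks.sh line 22 as logged; killprocess is simplified (the sudo special-case visible in the trace is only noted, not implemented).
  locks_exist() {                   # does pid $1 still hold an SPDK CPU-core lock file?
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  killprocess() {                   # SIGTERM the target and reap it, as traced for pid 59661
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 0      # probe only: nothing to do if the pid is already gone
    ps --no-headers -o comm= "$pid"     # the real helper inspects the name to special-case sudo
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
  }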
00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59661 ']' 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.043 ERROR: process (pid: 59661) is no longer running 00:07:02.043 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59661) - No such process 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:02.043 ************************************ 00:07:02.043 END TEST default_locks 00:07:02.043 ************************************ 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:02.043 00:07:02.043 real 0m4.727s 00:07:02.043 user 0m4.494s 00:07:02.043 sys 0m1.020s 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.043 10:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.043 10:19:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:02.043 10:19:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.043 10:19:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.043 10:19:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.043 ************************************ 00:07:02.043 START TEST default_locks_via_rpc 00:07:02.043 ************************************ 00:07:02.043 10:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:02.043 10:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59742 00:07:02.043 10:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.043 10:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59742 00:07:02.043 10:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59742 ']' 00:07:02.043 10:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.043 10:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.043 10:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.043 10:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.043 10:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.043 [2024-12-07 10:19:01.324374] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:02.043 [2024-12-07 10:19:01.324502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59742 ] 00:07:02.303 [2024-12-07 10:19:01.506953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.303 [2024-12-07 10:19:01.636406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59742 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59742 00:07:03.679 10:19:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.246 10:19:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59742 00:07:04.246 10:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59742 ']' 00:07:04.246 10:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59742 00:07:04.246 10:19:03 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:04.246 10:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.246 10:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59742 00:07:04.246 killing process with pid 59742 00:07:04.246 10:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.246 10:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.246 10:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59742' 00:07:04.246 10:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59742 00:07:04.246 10:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59742 00:07:06.782 ************************************ 00:07:06.782 END TEST default_locks_via_rpc 00:07:06.782 ************************************ 00:07:06.782 00:07:06.782 real 0m4.681s 00:07:06.782 user 0m4.436s 00:07:06.782 sys 0m1.059s 00:07:06.782 10:19:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.782 10:19:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.782 10:19:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:06.782 10:19:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.782 10:19:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.782 10:19:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.782 ************************************ 00:07:06.782 START TEST non_locking_app_on_locked_coremask 00:07:06.782 ************************************ 00:07:06.782 10:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:06.782 10:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59829 00:07:06.782 10:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.782 10:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59829 /var/tmp/spdk.sock 00:07:06.782 10:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59829 ']' 00:07:06.782 10:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.782 10:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.782 10:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:06.782 10:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.782 10:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.782 [2024-12-07 10:19:06.086806] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:06.782 [2024-12-07 10:19:06.086934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59829 ] 00:07:07.042 [2024-12-07 10:19:06.274448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.301 [2024-12-07 10:19:06.408328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.241 10:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.241 10:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:08.241 10:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59846 00:07:08.241 10:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59846 /var/tmp/spdk2.sock 00:07:08.241 10:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:08.241 10:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59846 ']' 00:07:08.241 10:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.241 10:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.241 10:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.241 10:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.241 10:19:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.241 [2024-12-07 10:19:07.541327] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:08.241 [2024-12-07 10:19:07.541700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59846 ] 00:07:08.500 [2024-12-07 10:19:07.722892] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
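These two launches are the heart of non_locking_app_on_locked_coremask: both targets ask for core 0, but the second opts out of core locking and listens on its own RPC socket, so it may start even though the core is already claimed. Condensed (flags, masks and socket paths exactly as logged; the readiness wait is only sketched):
  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$bin" -m 0x1 &                                                   # pid 59829: takes the core-0 lock
  "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # pid 59846: same core, no lock taken
  # each instance counts as started once its UNIX-domain RPC socket accepts connections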
00:07:08.500 [2024-12-07 10:19:07.722951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.760 [2024-12-07 10:19:07.990445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.301 10:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.301 10:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:11.301 10:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59829 00:07:11.301 10:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59829 00:07:11.301 10:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.284 10:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59829 00:07:12.284 10:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59829 ']' 00:07:12.284 10:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59829 00:07:12.284 10:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:12.284 10:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.284 10:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59829 00:07:12.284 killing process with pid 59829 00:07:12.284 10:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.284 10:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.284 10:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59829' 00:07:12.284 10:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59829 00:07:12.284 10:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59829 00:07:17.577 10:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59846 00:07:17.577 10:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59846 ']' 00:07:17.577 10:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59846 00:07:17.577 10:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:17.577 10:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.577 10:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59846 00:07:17.577 killing process with pid 59846 00:07:17.577 10:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.577 10:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.577 10:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59846' 00:07:17.577 10:19:16 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59846 00:07:17.577 10:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59846 00:07:19.508 00:07:19.508 real 0m12.725s 00:07:19.508 user 0m12.666s 00:07:19.508 sys 0m2.076s 00:07:19.508 10:19:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.508 ************************************ 00:07:19.508 10:19:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.508 END TEST non_locking_app_on_locked_coremask 00:07:19.508 ************************************ 00:07:19.508 10:19:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:19.508 10:19:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.508 10:19:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.508 10:19:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.508 ************************************ 00:07:19.508 START TEST locking_app_on_unlocked_coremask 00:07:19.508 ************************************ 00:07:19.508 10:19:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:19.508 10:19:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60007 00:07:19.508 10:19:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60007 /var/tmp/spdk.sock 00:07:19.508 10:19:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:19.508 10:19:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60007 ']' 00:07:19.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.508 10:19:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.508 10:19:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.508 10:19:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.508 10:19:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.508 10:19:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.768 [2024-12-07 10:19:18.896171] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:19.768 [2024-12-07 10:19:18.896296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60007 ] 00:07:19.768 [2024-12-07 10:19:19.080245] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:19.768 [2024-12-07 10:19:19.080292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.026 [2024-12-07 10:19:19.184565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.960 10:19:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.960 10:19:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:20.960 10:19:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60023 00:07:20.960 10:19:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:20.960 10:19:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60023 /var/tmp/spdk2.sock 00:07:20.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.960 10:19:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60023 ']' 00:07:20.960 10:19:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.960 10:19:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.961 10:19:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.961 10:19:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.961 10:19:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.961 [2024-12-07 10:19:20.115645] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:20.961 [2024-12-07 10:19:20.115766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60023 ] 00:07:20.961 [2024-12-07 10:19:20.294229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.219 [2024-12-07 10:19:20.522971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.755 10:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.755 10:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:23.755 10:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60023 00:07:23.755 10:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60023 00:07:23.755 10:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.324 10:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60007 00:07:24.324 10:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60007 ']' 00:07:24.324 10:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60007 00:07:24.324 10:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:24.324 10:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.324 10:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60007 00:07:24.324 killing process with pid 60007 00:07:24.324 10:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.324 10:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.324 10:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60007' 00:07:24.324 10:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60007 00:07:24.324 10:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60007 00:07:29.604 10:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60023 00:07:29.604 10:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60023 ']' 00:07:29.604 10:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60023 00:07:29.604 10:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:29.604 10:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.604 10:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60023 00:07:29.604 killing process with pid 60023 00:07:29.604 10:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.604 10:19:28 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.604 10:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60023' 00:07:29.604 10:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60023 00:07:29.604 10:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60023 00:07:31.511 ************************************ 00:07:31.511 END TEST locking_app_on_unlocked_coremask 00:07:31.511 ************************************ 00:07:31.511 00:07:31.511 real 0m11.694s 00:07:31.511 user 0m11.943s 00:07:31.512 sys 0m1.436s 00:07:31.512 10:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.512 10:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.512 10:19:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:31.512 10:19:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.512 10:19:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.512 10:19:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.512 ************************************ 00:07:31.512 START TEST locking_app_on_locked_coremask 00:07:31.512 ************************************ 00:07:31.512 10:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:31.512 10:19:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60171 00:07:31.512 10:19:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60171 /var/tmp/spdk.sock 00:07:31.512 10:19:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:31.512 10:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60171 ']' 00:07:31.512 10:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.512 10:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.512 10:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.512 10:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.512 10:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.512 [2024-12-07 10:19:30.654330] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:31.512 [2024-12-07 10:19:30.654460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60171 ] 00:07:31.512 [2024-12-07 10:19:30.834910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.771 [2024-12-07 10:19:30.942428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60193 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60193 /var/tmp/spdk2.sock 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60193 /var/tmp/spdk2.sock 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60193 /var/tmp/spdk2.sock 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60193 ']' 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.710 10:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.710 [2024-12-07 10:19:31.874791] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:32.710 [2024-12-07 10:19:31.874930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60193 ] 00:07:32.970 [2024-12-07 10:19:32.064665] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60171 has claimed it. 00:07:32.970 [2024-12-07 10:19:32.064728] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:33.229 ERROR: process (pid: 60193) is no longer running 00:07:33.230 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60193) - No such process 00:07:33.230 10:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.230 10:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:33.230 10:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:33.230 10:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.230 10:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.230 10:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.230 10:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60171 00:07:33.230 10:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60171 00:07:33.230 10:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:33.797 10:19:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60171 00:07:33.797 10:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60171 ']' 00:07:33.797 10:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60171 00:07:33.797 10:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:33.797 10:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.797 10:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60171 00:07:34.056 10:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.056 10:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.056 10:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60171' 00:07:34.056 killing process with pid 60171 00:07:34.056 10:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60171 00:07:34.056 10:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60171 00:07:36.587 ************************************ 00:07:36.587 END TEST locking_app_on_locked_coremask 00:07:36.587 ************************************ 00:07:36.587 00:07:36.587 real 0m4.919s 00:07:36.587 user 0m5.098s 00:07:36.587 sys 0m1.031s 00:07:36.587 10:19:35 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.587 10:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.587 10:19:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:36.587 10:19:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.587 10:19:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.587 10:19:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.587 ************************************ 00:07:36.587 START TEST locking_overlapped_coremask 00:07:36.587 ************************************ 00:07:36.587 10:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:36.587 10:19:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60263 00:07:36.587 10:19:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:36.587 10:19:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60263 /var/tmp/spdk.sock 00:07:36.587 10:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60263 ']' 00:07:36.587 10:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.587 10:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.587 10:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.587 10:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.587 10:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.587 [2024-12-07 10:19:35.657745] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:36.587 [2024-12-07 10:19:35.657881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60263 ] 00:07:36.587 [2024-12-07 10:19:35.838865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.845 [2024-12-07 10:19:35.953017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.846 [2024-12-07 10:19:35.953104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.846 [2024-12-07 10:19:35.953149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.785 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60282 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60282 /var/tmp/spdk2.sock 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60282 /var/tmp/spdk2.sock 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:37.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60282 /var/tmp/spdk2.sock 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60282 ']' 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.786 10:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.786 [2024-12-07 10:19:36.946545] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:37.786 [2024-12-07 10:19:36.946679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60282 ] 00:07:37.786 [2024-12-07 10:19:37.134568] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60263 has claimed it. 00:07:37.786 [2024-12-07 10:19:37.134626] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:38.353 ERROR: process (pid: 60282) is no longer running 00:07:38.353 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60282) - No such process 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60263 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60263 ']' 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60263 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60263 00:07:38.353 killing process with pid 60263 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60263' 00:07:38.353 10:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60263 00:07:38.353 10:19:37 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60263 00:07:40.890 00:07:40.890 real 0m4.408s 00:07:40.890 user 0m11.864s 00:07:40.890 sys 0m0.667s 00:07:40.890 10:19:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.890 ************************************ 00:07:40.890 10:19:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.890 END TEST locking_overlapped_coremask 00:07:40.890 ************************************ 00:07:40.890 10:19:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:40.890 10:19:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.890 10:19:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.890 10:19:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.890 ************************************ 00:07:40.890 START TEST locking_overlapped_coremask_via_rpc 00:07:40.890 ************************************ 00:07:40.890 10:19:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:40.890 10:19:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60352 00:07:40.890 10:19:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60352 /var/tmp/spdk.sock 00:07:40.890 10:19:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:40.890 10:19:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60352 ']' 00:07:40.890 10:19:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.890 10:19:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.890 10:19:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.890 10:19:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.890 10:19:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.890 [2024-12-07 10:19:40.145779] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:40.890 [2024-12-07 10:19:40.145918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60352 ] 00:07:41.150 [2024-12-07 10:19:40.326512] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
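check_remaining_locks, run at the end of the previous test, verifies that exactly the lock files for the claimed cores are present; per the trace it amounts to the following (reconstructed around the two array expansions shown in the log).
  check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)                        # lock files actually on disk
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})      # cores 0-2, i.e. mask 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]
  }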
00:07:41.150 [2024-12-07 10:19:40.326563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.150 [2024-12-07 10:19:40.435366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.150 [2024-12-07 10:19:40.435554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.150 [2024-12-07 10:19:40.435585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.089 10:19:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.089 10:19:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:42.089 10:19:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60370 00:07:42.089 10:19:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60370 /var/tmp/spdk2.sock 00:07:42.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:42.089 10:19:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:42.090 10:19:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60370 ']' 00:07:42.090 10:19:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.090 10:19:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.090 10:19:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.090 10:19:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.090 10:19:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.090 [2024-12-07 10:19:41.406400] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:42.090 [2024-12-07 10:19:41.406519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60370 ] 00:07:42.349 [2024-12-07 10:19:41.592859] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:42.349 [2024-12-07 10:19:41.592908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.608 [2024-12-07 10:19:41.868358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.608 [2024-12-07 10:19:41.872180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.608 [2024-12-07 10:19:41.872206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.143 [2024-12-07 10:19:43.965186] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60352 has claimed it. 
00:07:45.143 request: 00:07:45.143 { 00:07:45.143 "method": "framework_enable_cpumask_locks", 00:07:45.143 "req_id": 1 00:07:45.143 } 00:07:45.143 Got JSON-RPC error response 00:07:45.143 response: 00:07:45.143 { 00:07:45.143 "code": -32603, 00:07:45.143 "message": "Failed to claim CPU core: 2" 00:07:45.143 } 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60352 /var/tmp/spdk.sock 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60352 ']' 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.143 10:19:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.143 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.143 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:45.143 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60370 /var/tmp/spdk2.sock 00:07:45.143 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60370 ']' 00:07:45.143 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.143 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.143 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
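The failure captured above follows from the two targets' core masks overlapping on core 2 only. A minimal by-hand sketch of the same scenario, assuming a built SPDK tree and commands run from the repository root (sockets, pids and timings are illustrative, not taken from this run):

    # 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the only contended one:
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 / CPU core 2
    # Start both targets with core locks disabled, then enable the locks over RPC; the
    # second call is expected to fail with -32603 "Failed to claim CPU core: 2", as above.
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    ./scripts/rpc.py framework_enable_cpumask_locks
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks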
00:07:45.143 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.144 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.144 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.144 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:45.144 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:45.144 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:45.144 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:45.144 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:45.144 00:07:45.144 real 0m4.385s 00:07:45.144 user 0m1.250s 00:07:45.144 sys 0m0.222s 00:07:45.144 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.144 10:19:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.144 ************************************ 00:07:45.144 END TEST locking_overlapped_coremask_via_rpc 00:07:45.144 ************************************ 00:07:45.144 10:19:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:45.144 10:19:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60352 ]] 00:07:45.144 10:19:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60352 00:07:45.144 10:19:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60352 ']' 00:07:45.144 10:19:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60352 00:07:45.144 10:19:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:45.144 10:19:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.144 10:19:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60352 00:07:45.404 10:19:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.404 10:19:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.404 killing process with pid 60352 00:07:45.404 10:19:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60352' 00:07:45.404 10:19:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60352 00:07:45.404 10:19:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60352 00:07:47.940 10:19:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60370 ]] 00:07:47.940 10:19:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60370 00:07:47.940 10:19:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60370 ']' 00:07:47.940 10:19:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60370 00:07:47.940 10:19:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:47.940 10:19:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.940 
10:19:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60370 00:07:47.940 10:19:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:47.940 10:19:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:47.940 10:19:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60370' 00:07:47.940 killing process with pid 60370 00:07:47.940 10:19:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60370 00:07:47.940 10:19:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60370 00:07:50.475 10:19:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:50.475 10:19:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:50.475 10:19:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60352 ]] 00:07:50.475 10:19:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60352 00:07:50.475 10:19:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60352 ']' 00:07:50.475 10:19:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60352 00:07:50.475 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60352) - No such process 00:07:50.475 Process with pid 60352 is not found 00:07:50.475 10:19:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60352 is not found' 00:07:50.475 10:19:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60370 ]] 00:07:50.475 10:19:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60370 00:07:50.475 10:19:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60370 ']' 00:07:50.475 10:19:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60370 00:07:50.476 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60370) - No such process 00:07:50.476 Process with pid 60370 is not found 00:07:50.476 10:19:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60370 is not found' 00:07:50.476 10:19:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:50.476 00:07:50.476 real 0m53.395s 00:07:50.476 user 1m27.841s 00:07:50.476 sys 0m8.999s 00:07:50.476 10:19:49 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.476 ************************************ 00:07:50.476 END TEST cpu_locks 00:07:50.476 ************************************ 00:07:50.476 10:19:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.476 00:07:50.476 real 1m25.211s 00:07:50.476 user 2m29.783s 00:07:50.476 sys 0m13.843s 00:07:50.476 10:19:49 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.476 10:19:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:50.476 ************************************ 00:07:50.476 END TEST event 00:07:50.476 ************************************ 00:07:50.476 10:19:49 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:50.476 10:19:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.476 10:19:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.476 10:19:49 -- common/autotest_common.sh@10 -- # set +x 00:07:50.476 ************************************ 00:07:50.476 START TEST thread 00:07:50.476 ************************************ 00:07:50.476 10:19:49 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:50.476 * Looking for test storage... 
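The check_remaining_locks helper traced in both cpu_locks tests above reduces to a brace-expansion comparison: after a target claims mask 0x7, exactly one lock file per claimed core must remain. Restated on its own, with the paths taken verbatim from the trace:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'one lock file per claimed core'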
00:07:50.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:50.476 10:19:49 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:50.476 10:19:49 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:50.476 10:19:49 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:50.735 10:19:49 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:50.735 10:19:49 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.735 10:19:49 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.735 10:19:49 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.735 10:19:49 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.735 10:19:49 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.735 10:19:49 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.735 10:19:49 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.735 10:19:49 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.735 10:19:49 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.735 10:19:49 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.735 10:19:49 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.735 10:19:49 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:50.735 10:19:49 thread -- scripts/common.sh@345 -- # : 1 00:07:50.735 10:19:49 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.735 10:19:49 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.735 10:19:49 thread -- scripts/common.sh@365 -- # decimal 1 00:07:50.735 10:19:49 thread -- scripts/common.sh@353 -- # local d=1 00:07:50.735 10:19:49 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.735 10:19:49 thread -- scripts/common.sh@355 -- # echo 1 00:07:50.735 10:19:49 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.735 10:19:49 thread -- scripts/common.sh@366 -- # decimal 2 00:07:50.735 10:19:49 thread -- scripts/common.sh@353 -- # local d=2 00:07:50.735 10:19:49 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.735 10:19:49 thread -- scripts/common.sh@355 -- # echo 2 00:07:50.735 10:19:49 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.735 10:19:49 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.735 10:19:49 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.735 10:19:49 thread -- scripts/common.sh@368 -- # return 0 00:07:50.735 10:19:49 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.735 10:19:49 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:50.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.735 --rc genhtml_branch_coverage=1 00:07:50.735 --rc genhtml_function_coverage=1 00:07:50.735 --rc genhtml_legend=1 00:07:50.735 --rc geninfo_all_blocks=1 00:07:50.735 --rc geninfo_unexecuted_blocks=1 00:07:50.735 00:07:50.735 ' 00:07:50.735 10:19:49 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:50.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.735 --rc genhtml_branch_coverage=1 00:07:50.735 --rc genhtml_function_coverage=1 00:07:50.735 --rc genhtml_legend=1 00:07:50.735 --rc geninfo_all_blocks=1 00:07:50.735 --rc geninfo_unexecuted_blocks=1 00:07:50.735 00:07:50.735 ' 00:07:50.735 10:19:49 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:50.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:50.735 --rc genhtml_branch_coverage=1 00:07:50.735 --rc genhtml_function_coverage=1 00:07:50.735 --rc genhtml_legend=1 00:07:50.735 --rc geninfo_all_blocks=1 00:07:50.735 --rc geninfo_unexecuted_blocks=1 00:07:50.735 00:07:50.735 ' 00:07:50.735 10:19:49 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:50.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.735 --rc genhtml_branch_coverage=1 00:07:50.735 --rc genhtml_function_coverage=1 00:07:50.735 --rc genhtml_legend=1 00:07:50.735 --rc geninfo_all_blocks=1 00:07:50.735 --rc geninfo_unexecuted_blocks=1 00:07:50.735 00:07:50.735 ' 00:07:50.735 10:19:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:50.735 10:19:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:50.735 10:19:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.735 10:19:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.735 ************************************ 00:07:50.735 START TEST thread_poller_perf 00:07:50.735 ************************************ 00:07:50.735 10:19:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:50.735 [2024-12-07 10:19:49.955312] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:50.735 [2024-12-07 10:19:49.955430] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60565 ] 00:07:50.994 [2024-12-07 10:19:50.134396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.994 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:50.994 [2024-12-07 10:19:50.239127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.475 [2024-12-07T10:19:51.828Z] ====================================== 00:07:52.475 [2024-12-07T10:19:51.828Z] busy:2497735036 (cyc) 00:07:52.475 [2024-12-07T10:19:51.828Z] total_run_count: 412000 00:07:52.475 [2024-12-07T10:19:51.828Z] tsc_hz: 2490000000 (cyc) 00:07:52.475 [2024-12-07T10:19:51.828Z] ====================================== 00:07:52.475 [2024-12-07T10:19:51.828Z] poller_cost: 6062 (cyc), 2434 (nsec) 00:07:52.475 00:07:52.475 real 0m1.559s 00:07:52.475 user 0m1.339s 00:07:52.475 sys 0m0.113s 00:07:52.475 10:19:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.475 ************************************ 00:07:52.475 END TEST thread_poller_perf 00:07:52.475 ************************************ 00:07:52.475 10:19:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:52.475 10:19:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:52.475 10:19:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:52.475 10:19:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.475 10:19:51 thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.475 ************************************ 00:07:52.475 START TEST thread_poller_perf 00:07:52.475 ************************************ 00:07:52.475 10:19:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:52.475 [2024-12-07 10:19:51.599668] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:52.475 [2024-12-07 10:19:51.599798] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60607 ] 00:07:52.475 [2024-12-07 10:19:51.782292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.733 Running 1000 pollers for 1 seconds with 0 microseconds period. 
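The poller_cost reported for the 1 µs run above is simply the busy cycle count divided by the number of poller runs, converted to nanoseconds with the reported TSC rate; the figures can be checked with shell arithmetic (values copied from that run):

    busy=2497735036; runs=412000; tsc_hz=2490000000
    echo $(( busy / runs ))                        # 6062 cycles per poller invocation
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # about 2434 ns at the 2.49 GHz TSC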
00:07:52.733 [2024-12-07 10:19:51.897266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.109 [2024-12-07T10:19:53.462Z] ====================================== 00:07:54.109 [2024-12-07T10:19:53.462Z] busy:2493966880 (cyc) 00:07:54.109 [2024-12-07T10:19:53.462Z] total_run_count: 5090000 00:07:54.109 [2024-12-07T10:19:53.462Z] tsc_hz: 2490000000 (cyc) 00:07:54.109 [2024-12-07T10:19:53.462Z] ====================================== 00:07:54.109 [2024-12-07T10:19:53.462Z] poller_cost: 489 (cyc), 196 (nsec) 00:07:54.109 00:07:54.109 real 0m1.564s 00:07:54.109 user 0m1.339s 00:07:54.109 sys 0m0.118s 00:07:54.109 10:19:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.109 10:19:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:54.109 ************************************ 00:07:54.109 END TEST thread_poller_perf 00:07:54.109 ************************************ 00:07:54.109 10:19:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:54.109 00:07:54.109 real 0m3.509s 00:07:54.109 user 0m2.852s 00:07:54.109 sys 0m0.447s 00:07:54.109 10:19:53 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.109 10:19:53 thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.109 ************************************ 00:07:54.109 END TEST thread 00:07:54.109 ************************************ 00:07:54.109 10:19:53 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:54.109 10:19:53 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:54.109 10:19:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.109 10:19:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.109 10:19:53 -- common/autotest_common.sh@10 -- # set +x 00:07:54.109 ************************************ 00:07:54.109 START TEST app_cmdline 00:07:54.109 ************************************ 00:07:54.109 10:19:53 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:54.109 * Looking for test storage... 
00:07:54.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:54.109 10:19:53 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:54.109 10:19:53 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:54.109 10:19:53 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:54.369 10:19:53 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.369 10:19:53 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:54.369 10:19:53 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.369 10:19:53 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:54.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.369 --rc genhtml_branch_coverage=1 00:07:54.369 --rc genhtml_function_coverage=1 00:07:54.369 --rc genhtml_legend=1 00:07:54.369 --rc geninfo_all_blocks=1 00:07:54.369 --rc geninfo_unexecuted_blocks=1 00:07:54.369 00:07:54.369 ' 00:07:54.369 10:19:53 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:54.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.369 --rc genhtml_branch_coverage=1 00:07:54.369 --rc genhtml_function_coverage=1 00:07:54.369 --rc genhtml_legend=1 00:07:54.369 --rc geninfo_all_blocks=1 00:07:54.369 --rc geninfo_unexecuted_blocks=1 00:07:54.369 
00:07:54.369 ' 00:07:54.369 10:19:53 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:54.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.369 --rc genhtml_branch_coverage=1 00:07:54.369 --rc genhtml_function_coverage=1 00:07:54.369 --rc genhtml_legend=1 00:07:54.369 --rc geninfo_all_blocks=1 00:07:54.369 --rc geninfo_unexecuted_blocks=1 00:07:54.369 00:07:54.369 ' 00:07:54.369 10:19:53 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:54.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.369 --rc genhtml_branch_coverage=1 00:07:54.369 --rc genhtml_function_coverage=1 00:07:54.369 --rc genhtml_legend=1 00:07:54.369 --rc geninfo_all_blocks=1 00:07:54.369 --rc geninfo_unexecuted_blocks=1 00:07:54.369 00:07:54.369 ' 00:07:54.369 10:19:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:54.369 10:19:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60696 00:07:54.369 10:19:53 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:54.369 10:19:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60696 00:07:54.369 10:19:53 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60696 ']' 00:07:54.369 10:19:53 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.369 10:19:53 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.369 10:19:53 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.369 10:19:53 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.369 10:19:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:54.369 [2024-12-07 10:19:53.609966] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:54.369 [2024-12-07 10:19:53.610100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60696 ] 00:07:54.628 [2024-12-07 10:19:53.789660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.628 [2024-12-07 10:19:53.894877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.567 10:19:54 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.567 10:19:54 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:55.567 10:19:54 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:55.567 { 00:07:55.567 "version": "SPDK v25.01-pre git sha1 a2f5e1c2d", 00:07:55.567 "fields": { 00:07:55.567 "major": 25, 00:07:55.567 "minor": 1, 00:07:55.567 "patch": 0, 00:07:55.567 "suffix": "-pre", 00:07:55.567 "commit": "a2f5e1c2d" 00:07:55.567 } 00:07:55.567 } 00:07:55.567 10:19:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:55.567 10:19:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:55.567 10:19:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:55.567 10:19:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:55.567 10:19:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:55.567 10:19:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:55.567 10:19:54 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.567 10:19:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:55.567 10:19:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:55.567 10:19:54 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.827 10:19:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:55.827 10:19:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:55.827 10:19:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.827 10:19:54 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:55.827 10:19:54 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.827 10:19:54 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.827 10:19:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.827 10:19:54 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.827 10:19:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.827 10:19:54 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.827 10:19:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.827 10:19:54 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.827 10:19:54 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:55.827 10:19:54 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.827 request: 00:07:55.827 { 00:07:55.827 "method": "env_dpdk_get_mem_stats", 00:07:55.827 "req_id": 1 00:07:55.827 } 00:07:55.827 Got JSON-RPC error response 00:07:55.827 response: 00:07:55.827 { 00:07:55.827 "code": -32601, 00:07:55.827 "message": "Method not found" 00:07:55.827 } 00:07:55.827 10:19:55 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:55.827 10:19:55 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.827 10:19:55 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:55.827 10:19:55 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.827 10:19:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60696 00:07:55.827 10:19:55 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60696 ']' 00:07:55.827 10:19:55 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60696 00:07:55.827 10:19:55 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:55.827 10:19:55 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.827 10:19:55 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60696 00:07:56.086 10:19:55 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.086 10:19:55 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.086 killing process with pid 60696 00:07:56.086 10:19:55 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60696' 00:07:56.086 10:19:55 app_cmdline -- common/autotest_common.sh@973 -- # kill 60696 00:07:56.086 10:19:55 app_cmdline -- common/autotest_common.sh@978 -- # wait 60696 00:07:58.623 00:07:58.623 real 0m4.222s 00:07:58.623 user 0m4.308s 00:07:58.623 sys 0m0.699s 00:07:58.623 10:19:57 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.623 10:19:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:58.623 ************************************ 00:07:58.623 END TEST app_cmdline 00:07:58.623 ************************************ 00:07:58.623 10:19:57 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:58.623 10:19:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.623 10:19:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.623 10:19:57 -- common/autotest_common.sh@10 -- # set +x 00:07:58.623 ************************************ 00:07:58.623 START TEST version 00:07:58.623 ************************************ 00:07:58.623 10:19:57 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:58.623 * Looking for test storage... 
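The -32601 result above is exactly what the --rpcs-allowed whitelist used by this test is meant to produce: only the two listed methods are callable on the target. A minimal sketch of the same interaction, assuming a built tree and the default /var/tmp/spdk.sock socket:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version         # allowed; returns the version object shown above
    ./scripts/rpc.py rpc_get_methods          # allowed; lists exactly the two whitelisted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats   # not whitelisted; rejected with -32601 "Method not found"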
00:07:58.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:58.623 10:19:57 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:58.623 10:19:57 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:58.623 10:19:57 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:58.623 10:19:57 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:58.623 10:19:57 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.623 10:19:57 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.623 10:19:57 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.623 10:19:57 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.623 10:19:57 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.623 10:19:57 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.623 10:19:57 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.623 10:19:57 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.623 10:19:57 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.623 10:19:57 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.623 10:19:57 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.623 10:19:57 version -- scripts/common.sh@344 -- # case "$op" in 00:07:58.623 10:19:57 version -- scripts/common.sh@345 -- # : 1 00:07:58.623 10:19:57 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.623 10:19:57 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:58.623 10:19:57 version -- scripts/common.sh@365 -- # decimal 1 00:07:58.623 10:19:57 version -- scripts/common.sh@353 -- # local d=1 00:07:58.623 10:19:57 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.623 10:19:57 version -- scripts/common.sh@355 -- # echo 1 00:07:58.623 10:19:57 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.623 10:19:57 version -- scripts/common.sh@366 -- # decimal 2 00:07:58.623 10:19:57 version -- scripts/common.sh@353 -- # local d=2 00:07:58.623 10:19:57 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.623 10:19:57 version -- scripts/common.sh@355 -- # echo 2 00:07:58.623 10:19:57 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.623 10:19:57 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.623 10:19:57 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.623 10:19:57 version -- scripts/common.sh@368 -- # return 0 00:07:58.623 10:19:57 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.623 10:19:57 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:58.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.623 --rc genhtml_branch_coverage=1 00:07:58.623 --rc genhtml_function_coverage=1 00:07:58.623 --rc genhtml_legend=1 00:07:58.623 --rc geninfo_all_blocks=1 00:07:58.623 --rc geninfo_unexecuted_blocks=1 00:07:58.623 00:07:58.623 ' 00:07:58.623 10:19:57 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:58.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.623 --rc genhtml_branch_coverage=1 00:07:58.623 --rc genhtml_function_coverage=1 00:07:58.623 --rc genhtml_legend=1 00:07:58.623 --rc geninfo_all_blocks=1 00:07:58.623 --rc geninfo_unexecuted_blocks=1 00:07:58.623 00:07:58.623 ' 00:07:58.623 10:19:57 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:58.623 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:58.623 --rc genhtml_branch_coverage=1 00:07:58.623 --rc genhtml_function_coverage=1 00:07:58.623 --rc genhtml_legend=1 00:07:58.623 --rc geninfo_all_blocks=1 00:07:58.623 --rc geninfo_unexecuted_blocks=1 00:07:58.623 00:07:58.623 ' 00:07:58.623 10:19:57 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:58.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.624 --rc genhtml_branch_coverage=1 00:07:58.624 --rc genhtml_function_coverage=1 00:07:58.624 --rc genhtml_legend=1 00:07:58.624 --rc geninfo_all_blocks=1 00:07:58.624 --rc geninfo_unexecuted_blocks=1 00:07:58.624 00:07:58.624 ' 00:07:58.624 10:19:57 version -- app/version.sh@17 -- # get_header_version major 00:07:58.624 10:19:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:58.624 10:19:57 version -- app/version.sh@14 -- # cut -f2 00:07:58.624 10:19:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:58.624 10:19:57 version -- app/version.sh@17 -- # major=25 00:07:58.624 10:19:57 version -- app/version.sh@18 -- # get_header_version minor 00:07:58.624 10:19:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:58.624 10:19:57 version -- app/version.sh@14 -- # cut -f2 00:07:58.624 10:19:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:58.624 10:19:57 version -- app/version.sh@18 -- # minor=1 00:07:58.624 10:19:57 version -- app/version.sh@19 -- # get_header_version patch 00:07:58.624 10:19:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:58.624 10:19:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:58.624 10:19:57 version -- app/version.sh@14 -- # cut -f2 00:07:58.624 10:19:57 version -- app/version.sh@19 -- # patch=0 00:07:58.624 10:19:57 version -- app/version.sh@20 -- # get_header_version suffix 00:07:58.624 10:19:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:58.624 10:19:57 version -- app/version.sh@14 -- # cut -f2 00:07:58.624 10:19:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:58.624 10:19:57 version -- app/version.sh@20 -- # suffix=-pre 00:07:58.624 10:19:57 version -- app/version.sh@22 -- # version=25.1 00:07:58.624 10:19:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:58.624 10:19:57 version -- app/version.sh@28 -- # version=25.1rc0 00:07:58.624 10:19:57 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:58.624 10:19:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:58.624 10:19:57 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:58.624 10:19:57 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:58.624 00:07:58.624 real 0m0.327s 00:07:58.624 user 0m0.175s 00:07:58.624 sys 0m0.218s 00:07:58.624 10:19:57 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.624 10:19:57 version -- common/autotest_common.sh@10 -- # set +x 00:07:58.624 ************************************ 00:07:58.624 END TEST version 00:07:58.624 ************************************ 00:07:58.624 10:19:57 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:58.624 10:19:57 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:58.624 10:19:57 -- spdk/autotest.sh@194 -- # uname -s 00:07:58.624 10:19:57 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:58.624 10:19:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:58.624 10:19:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:58.624 10:19:57 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:58.624 10:19:57 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:58.624 10:19:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.624 10:19:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.624 10:19:57 -- common/autotest_common.sh@10 -- # set +x 00:07:58.883 ************************************ 00:07:58.883 START TEST blockdev_nvme 00:07:58.883 ************************************ 00:07:58.883 10:19:57 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:58.883 * Looking for test storage... 00:07:58.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:58.883 10:19:58 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:58.883 10:19:58 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:58.883 10:19:58 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:58.883 10:19:58 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.883 10:19:58 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:58.883 10:19:58 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.883 10:19:58 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:58.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.883 --rc genhtml_branch_coverage=1 00:07:58.883 --rc genhtml_function_coverage=1 00:07:58.883 --rc genhtml_legend=1 00:07:58.883 --rc geninfo_all_blocks=1 00:07:58.883 --rc geninfo_unexecuted_blocks=1 00:07:58.883 00:07:58.883 ' 00:07:58.883 10:19:58 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:58.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.884 --rc genhtml_branch_coverage=1 00:07:58.884 --rc genhtml_function_coverage=1 00:07:58.884 --rc genhtml_legend=1 00:07:58.884 --rc geninfo_all_blocks=1 00:07:58.884 --rc geninfo_unexecuted_blocks=1 00:07:58.884 00:07:58.884 ' 00:07:58.884 10:19:58 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:58.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.884 --rc genhtml_branch_coverage=1 00:07:58.884 --rc genhtml_function_coverage=1 00:07:58.884 --rc genhtml_legend=1 00:07:58.884 --rc geninfo_all_blocks=1 00:07:58.884 --rc geninfo_unexecuted_blocks=1 00:07:58.884 00:07:58.884 ' 00:07:58.884 10:19:58 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:58.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.884 --rc genhtml_branch_coverage=1 00:07:58.884 --rc genhtml_function_coverage=1 00:07:58.884 --rc genhtml_legend=1 00:07:58.884 --rc geninfo_all_blocks=1 00:07:58.884 --rc geninfo_unexecuted_blocks=1 00:07:58.884 00:07:58.884 ' 00:07:58.884 10:19:58 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:58.884 10:19:58 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:58.884 10:19:58 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:58.884 10:19:58 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:58.884 10:19:58 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:58.884 10:19:58 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:58.884 10:19:58 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:58.884 10:19:58 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:58.884 10:19:58 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:58.884 10:19:58 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:58.884 10:19:58 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:58.884 10:19:58 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:58.884 10:19:58 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60882 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:59.143 10:19:58 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60882 00:07:59.143 10:19:58 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60882 ']' 00:07:59.143 10:19:58 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.143 10:19:58 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.143 10:19:58 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.143 10:19:58 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.143 10:19:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.143 [2024-12-07 10:19:58.345121] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:59.143 [2024-12-07 10:19:58.345395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60882 ] 00:07:59.402 [2024-12-07 10:19:58.523959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.402 [2024-12-07 10:19:58.636214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.340 10:19:59 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.340 10:19:59 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:08:00.340 10:19:59 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:08:00.340 10:19:59 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:08:00.340 10:19:59 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:00.340 10:19:59 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:00.340 10:19:59 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:00.340 10:19:59 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:00.340 10:19:59 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.340 10:19:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.600 10:19:59 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.600 10:19:59 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:08:00.600 10:19:59 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.600 10:19:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.600 10:19:59 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.600 10:19:59 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:08:00.600 10:19:59 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:08:00.600 10:19:59 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.600 10:19:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.600 10:19:59 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.600 10:19:59 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:08:00.600 10:19:59 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.600 10:19:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.861 10:19:59 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.861 10:19:59 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:00.861 10:19:59 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.861 10:19:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.861 10:19:59 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.861 10:19:59 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:08:00.861 10:19:59 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:08:00.861 10:19:59 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.861 10:19:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.861 10:19:59 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:08:00.861 10:20:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.861 10:20:00 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:08:00.861 10:20:00 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:08:00.861 10:20:00 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "fc73c5db-ed5c-48cd-a00c-d48f95c3adcb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fc73c5db-ed5c-48cd-a00c-d48f95c3adcb",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "1c72f9e3-eefa-4790-b6d9-e7844c0b51e6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1c72f9e3-eefa-4790-b6d9-e7844c0b51e6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "fbda9d41-515c-4c27-847b-6812e06201fb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fbda9d41-515c-4c27-847b-6812e06201fb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ab2ad2b0-8243-4d7b-ac4d-3e484b64cd45"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ab2ad2b0-8243-4d7b-ac4d-3e484b64cd45",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "21b78f13-aa05-4b82-ad2a-8dbc538473e5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "21b78f13-aa05-4b82-ad2a-8dbc538473e5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "55d57347-6f35-42fb-bbee-9428c6ab43ed"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "55d57347-6f35-42fb-bbee-9428c6ab43ed",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:00.861 10:20:00 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:08:00.861 10:20:00 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:08:00.861 10:20:00 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:08:00.861 10:20:00 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 60882 00:08:00.861 10:20:00 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60882 ']' 00:08:00.861 10:20:00 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60882 00:08:00.861 10:20:00 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:08:00.861 10:20:00 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.861 10:20:00 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60882 00:08:00.861 killing process with pid 60882 00:08:00.861 10:20:00 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.862 10:20:00 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.862 10:20:00 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60882' 00:08:00.862 10:20:00 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60882 00:08:00.862 10:20:00 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60882 00:08:03.396 10:20:02 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:03.396 10:20:02 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:03.396 10:20:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:03.396 10:20:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.396 10:20:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.396 ************************************ 00:08:03.396 START TEST bdev_hello_world 00:08:03.396 ************************************ 00:08:03.396 10:20:02 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:03.396 [2024-12-07 10:20:02.563384] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:03.396 [2024-12-07 10:20:02.563517] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60977 ] 00:08:03.396 [2024-12-07 10:20:02.745071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.655 [2024-12-07 10:20:02.856050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.222 [2024-12-07 10:20:03.494542] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:04.222 [2024-12-07 10:20:03.494589] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:04.222 [2024-12-07 10:20:03.494625] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:04.222 [2024-12-07 10:20:03.497570] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:04.222 [2024-12-07 10:20:03.498200] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:04.222 [2024-12-07 10:20:03.498309] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:04.222 [2024-12-07 10:20:03.498615] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:08:04.222 00:08:04.222 [2024-12-07 10:20:03.498844] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:05.597 ************************************ 00:08:05.597 END TEST bdev_hello_world 00:08:05.597 ************************************ 00:08:05.597 00:08:05.597 real 0m2.111s 00:08:05.597 user 0m1.733s 00:08:05.597 sys 0m0.270s 00:08:05.597 10:20:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.597 10:20:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:05.597 10:20:04 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:05.597 10:20:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:05.597 10:20:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.597 10:20:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.597 ************************************ 00:08:05.597 START TEST bdev_bounds 00:08:05.597 ************************************ 00:08:05.597 10:20:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:05.597 Process bdevio pid: 61019 00:08:05.597 10:20:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61019 00:08:05.597 10:20:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:05.597 10:20:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:05.597 10:20:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61019' 00:08:05.597 10:20:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61019 00:08:05.597 10:20:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61019 ']' 00:08:05.597 10:20:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.597 10:20:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.597 10:20:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.597 10:20:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.597 10:20:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:05.597 [2024-12-07 10:20:04.755381] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:05.597 [2024-12-07 10:20:04.755679] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61019 ] 00:08:05.597 [2024-12-07 10:20:04.936919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.855 [2024-12-07 10:20:05.048692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.855 [2024-12-07 10:20:05.048849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.855 [2024-12-07 10:20:05.048877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.421 10:20:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.421 10:20:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:06.421 10:20:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:06.681 I/O targets: 00:08:06.681 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:06.681 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:06.681 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:06.681 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:06.681 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:06.681 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:06.681 00:08:06.681 00:08:06.681 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.681 http://cunit.sourceforge.net/ 00:08:06.681 00:08:06.681 00:08:06.681 Suite: bdevio tests on: Nvme3n1 00:08:06.681 Test: blockdev write read block ...passed 00:08:06.681 Test: blockdev write zeroes read block ...passed 00:08:06.681 Test: blockdev write zeroes read no split ...passed 00:08:06.681 Test: blockdev write zeroes read split ...passed 00:08:06.681 Test: blockdev write zeroes read split partial ...passed 00:08:06.681 Test: blockdev reset ...[2024-12-07 10:20:05.883296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:06.681 [2024-12-07 10:20:05.888037] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller spassed 00:08:06.681 Test: blockdev write read 8 blocks ...uccessful. 
00:08:06.681 passed 00:08:06.681 Test: blockdev write read size > 128k ...passed 00:08:06.681 Test: blockdev write read invalid size ...passed 00:08:06.681 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:06.681 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:06.681 Test: blockdev write read max offset ...passed 00:08:06.681 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:06.681 Test: blockdev writev readv 8 blocks ...passed 00:08:06.681 Test: blockdev writev readv 30 x 1block ...passed 00:08:06.681 Test: blockdev writev readv block ...passed 00:08:06.681 Test: blockdev writev readv size > 128k ...passed 00:08:06.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:06.681 Test: blockdev comparev and writev ...[2024-12-07 10:20:05.898763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c120a000 len:0x1000 00:08:06.681 [2024-12-07 10:20:05.898821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:06.681 passed 00:08:06.681 Test: blockdev nvme passthru rw ...passed 00:08:06.681 Test: blockdev nvme passthru vendor specific ...passed 00:08:06.681 Test: blockdev nvme admin passthru ...[2024-12-07 10:20:05.900063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:06.681 [2024-12-07 10:20:05.900108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:06.681 passed 00:08:06.681 Test: blockdev copy ...passed 00:08:06.681 Suite: bdevio tests on: Nvme2n3 00:08:06.681 Test: blockdev write read block ...passed 00:08:06.681 Test: blockdev write zeroes read block ...passed 00:08:06.681 Test: blockdev write zeroes read no split ...passed 00:08:06.681 Test: blockdev write zeroes read split ...passed 00:08:06.681 Test: blockdev write zeroes read split partial ...passed 00:08:06.681 Test: blockdev reset ...[2024-12-07 10:20:05.977006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:06.681 passed 00:08:06.681 Test: blockdev write read 8 blocks ...[2024-12-07 10:20:05.981919] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:06.681 passed 00:08:06.681 Test: blockdev write read size > 128k ...passed 00:08:06.681 Test: blockdev write read invalid size ...passed 00:08:06.681 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:06.681 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:06.681 Test: blockdev write read max offset ...passed 00:08:06.681 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:06.681 Test: blockdev writev readv 8 blocks ...passed 00:08:06.681 Test: blockdev writev readv 30 x 1block ...passed 00:08:06.681 Test: blockdev writev readv block ...passed 00:08:06.681 Test: blockdev writev readv size > 128k ...passed 00:08:06.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:06.681 Test: blockdev comparev and writev ...[2024-12-07 10:20:05.991813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a3c06000 len:0x1000 00:08:06.681 [2024-12-07 10:20:05.991859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:06.681 passed 00:08:06.681 Test: blockdev nvme passthru rw ...passed 00:08:06.681 Test: blockdev nvme passthru vendor specific ...[2024-12-07 10:20:05.992767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:08:06.681 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:08:06.681 [2024-12-07 10:20:05.992907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:06.681 passed 00:08:06.681 Test: blockdev copy ...passed 00:08:06.681 Suite: bdevio tests on: Nvme2n2 00:08:06.681 Test: blockdev write read block ...passed 00:08:06.681 Test: blockdev write zeroes read block ...passed 00:08:06.681 Test: blockdev write zeroes read no split ...passed 00:08:06.940 Test: blockdev write zeroes read split ...passed 00:08:06.940 Test: blockdev write zeroes read split partial ...passed 00:08:06.940 Test: blockdev reset ...[2024-12-07 10:20:06.068339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:06.940 [2024-12-07 10:20:06.073226] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:08:06.940 Test: blockdev write read 8 blocks ...uccessful. 
00:08:06.940 passed 00:08:06.940 Test: blockdev write read size > 128k ...passed 00:08:06.940 Test: blockdev write read invalid size ...passed 00:08:06.940 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:06.940 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:06.940 Test: blockdev write read max offset ...passed 00:08:06.940 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:06.940 Test: blockdev writev readv 8 blocks ...passed 00:08:06.940 Test: blockdev writev readv 30 x 1block ...passed 00:08:06.940 Test: blockdev writev readv block ...passed 00:08:06.940 Test: blockdev writev readv size > 128k ...passed 00:08:06.940 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:06.940 Test: blockdev comparev and writev ...[2024-12-07 10:20:06.084214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d123c000 len:0x1000 00:08:06.940 [2024-12-07 10:20:06.084400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:06.940 passed 00:08:06.940 Test: blockdev nvme passthru rw ...passed 00:08:06.940 Test: blockdev nvme passthru vendor specific ...[2024-12-07 10:20:06.085995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:06.940 [2024-12-07 10:20:06.086091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:08:06.940 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:08:06.940 passed 00:08:06.940 Test: blockdev copy ...passed 00:08:06.940 Suite: bdevio tests on: Nvme2n1 00:08:06.940 Test: blockdev write read block ...passed 00:08:06.940 Test: blockdev write zeroes read block ...passed 00:08:06.940 Test: blockdev write zeroes read no split ...passed 00:08:06.940 Test: blockdev write zeroes read split ...passed 00:08:06.940 Test: blockdev write zeroes read split partial ...passed 00:08:06.940 Test: blockdev reset ...[2024-12-07 10:20:06.163432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:06.940 [2024-12-07 10:20:06.168447] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spasseduccessful. 
00:08:06.940 00:08:06.940 Test: blockdev write read 8 blocks ...passed 00:08:06.940 Test: blockdev write read size > 128k ...passed 00:08:06.940 Test: blockdev write read invalid size ...passed 00:08:06.940 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:06.940 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:06.940 Test: blockdev write read max offset ...passed 00:08:06.940 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:06.940 Test: blockdev writev readv 8 blocks ...passed 00:08:06.940 Test: blockdev writev readv 30 x 1block ...passed 00:08:06.940 Test: blockdev writev readv block ...passed 00:08:06.940 Test: blockdev writev readv size > 128k ...passed 00:08:06.940 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:06.940 Test: blockdev comparev and writev ...[2024-12-07 10:20:06.178945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1238000 len:0x1000 00:08:06.940 [2024-12-07 10:20:06.179153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:06.940 passed 00:08:06.940 Test: blockdev nvme passthru rw ...passed 00:08:06.940 Test: blockdev nvme passthru vendor specific ...[2024-12-07 10:20:06.180733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:06.940 [2024-12-07 10:20:06.180830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:08:06.940 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:08:06.940 passed 00:08:06.940 Test: blockdev copy ...passed 00:08:06.940 Suite: bdevio tests on: Nvme1n1 00:08:06.940 Test: blockdev write read block ...passed 00:08:06.940 Test: blockdev write zeroes read block ...passed 00:08:06.940 Test: blockdev write zeroes read no split ...passed 00:08:06.940 Test: blockdev write zeroes read split ...passed 00:08:06.940 Test: blockdev write zeroes read split partial ...passed 00:08:06.940 Test: blockdev reset ...[2024-12-07 10:20:06.254593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:06.940 passed 00:08:06.940 Test: blockdev write read 8 blocks ...[2024-12-07 10:20:06.259147] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:06.940 passed 00:08:06.940 Test: blockdev write read size > 128k ...passed 00:08:06.940 Test: blockdev write read invalid size ...passed 00:08:06.940 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:06.940 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:06.940 Test: blockdev write read max offset ...passed 00:08:06.940 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:06.940 Test: blockdev writev readv 8 blocks ...passed 00:08:06.940 Test: blockdev writev readv 30 x 1block ...passed 00:08:06.940 Test: blockdev writev readv block ...passed 00:08:06.940 Test: blockdev writev readv size > 128k ...passed 00:08:06.940 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:06.940 Test: blockdev comparev and writev ...[2024-12-07 10:20:06.268059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1234000 len:0x1000 00:08:06.941 [2024-12-07 10:20:06.268107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:06.941 passed 00:08:06.941 Test: blockdev nvme passthru rw ...passed 00:08:06.941 Test: blockdev nvme passthru vendor specific ...[2024-12-07 10:20:06.269147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:08:06.941 Test: blockdev nvme admin passthru ...RP2 0x0 00:08:06.941 [2024-12-07 10:20:06.269285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:06.941 passed 00:08:06.941 Test: blockdev copy ...passed 00:08:06.941 Suite: bdevio tests on: Nvme0n1 00:08:06.941 Test: blockdev write read block ...passed 00:08:06.941 Test: blockdev write zeroes read block ...passed 00:08:06.941 Test: blockdev write zeroes read no split ...passed 00:08:07.199 Test: blockdev write zeroes read split ...passed 00:08:07.199 Test: blockdev write zeroes read split partial ...passed 00:08:07.199 Test: blockdev reset ...[2024-12-07 10:20:06.346771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:07.199 [2024-12-07 10:20:06.351461] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller spassed 00:08:07.199 Test: blockdev write read 8 blocks ...uccessful. 00:08:07.199 passed 00:08:07.199 Test: blockdev write read size > 128k ...passed 00:08:07.199 Test: blockdev write read invalid size ...passed 00:08:07.199 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:07.199 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:07.199 Test: blockdev write read max offset ...passed 00:08:07.199 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:07.199 Test: blockdev writev readv 8 blocks ...passed 00:08:07.199 Test: blockdev writev readv 30 x 1block ...passed 00:08:07.199 Test: blockdev writev readv block ...passed 00:08:07.199 Test: blockdev writev readv size > 128k ...passed 00:08:07.199 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:07.199 Test: blockdev comparev and writev ...passed 00:08:07.199 Test: blockdev nvme passthru rw ...[2024-12-07 10:20:06.361042] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:07.199 separate metadata which is not supported yet. 
00:08:07.199 passed 00:08:07.199 Test: blockdev nvme passthru vendor specific ...[2024-12-07 10:20:06.361880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:07.199 [2024-12-07 10:20:06.362119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0passed sqhd:0017 p:1 m:0 dnr:1 00:08:07.199 00:08:07.199 Test: blockdev nvme admin passthru ...passed 00:08:07.199 Test: blockdev copy ...passed 00:08:07.199 00:08:07.199 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.199 suites 6 6 n/a 0 0 00:08:07.199 tests 138 138 138 0 0 00:08:07.199 asserts 893 893 893 0 n/a 00:08:07.199 00:08:07.199 Elapsed time = 1.483 seconds 00:08:07.199 0 00:08:07.199 10:20:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61019 00:08:07.199 10:20:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61019 ']' 00:08:07.199 10:20:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61019 00:08:07.199 10:20:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:07.199 10:20:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.199 10:20:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61019 00:08:07.199 killing process with pid 61019 00:08:07.199 10:20:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.199 10:20:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.199 10:20:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61019' 00:08:07.199 10:20:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61019 00:08:07.199 10:20:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61019 00:08:08.136 10:20:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:08.136 00:08:08.136 real 0m2.798s 00:08:08.136 user 0m7.114s 00:08:08.136 sys 0m0.410s 00:08:08.136 10:20:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.136 ************************************ 00:08:08.136 END TEST bdev_bounds 00:08:08.136 ************************************ 00:08:08.136 10:20:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:08.394 10:20:07 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:08.394 10:20:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:08.394 10:20:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.394 10:20:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.394 ************************************ 00:08:08.394 START TEST bdev_nbd 00:08:08.394 ************************************ 00:08:08.394 10:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:08.394 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:08.394 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:08.394 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.394 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:08.394 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61084 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61084 /var/tmp/spdk-nbd.sock 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61084 ']' 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:08.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.395 10:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:08.395 [2024-12-07 10:20:07.646438] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:08.395 [2024-12-07 10:20:07.646580] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.653 [2024-12-07 10:20:07.831025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.653 [2024-12-07 10:20:07.936451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.590 1+0 records in 
00:08:09.590 1+0 records out 00:08:09.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588234 s, 7.0 MB/s 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:09.590 10:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:09.848 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:09.848 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:09.848 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:09.848 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:09.848 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:09.848 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:09.848 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:09.848 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:09.848 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:09.848 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:09.848 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:09.848 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.848 1+0 records in 00:08:09.849 1+0 records out 00:08:09.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072706 s, 5.6 MB/s 00:08:09.849 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.849 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:09.849 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.849 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:09.849 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:09.849 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:09.849 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:09.849 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.107 1+0 records in 00:08:10.107 1+0 records out 00:08:10.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593222 s, 6.9 MB/s 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:10.107 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:10.367 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:10.367 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.368 1+0 records in 00:08:10.368 1+0 records out 00:08:10.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811546 s, 5.0 MB/s 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.368 10:20:09 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:10.368 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.627 1+0 records in 00:08:10.627 1+0 records out 00:08:10.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078785 s, 5.2 MB/s 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:10.627 10:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.887 1+0 records in 00:08:10.887 1+0 records out 00:08:10.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000817977 s, 5.0 MB/s 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:10.887 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:11.146 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:11.146 { 00:08:11.146 "nbd_device": "/dev/nbd0", 00:08:11.146 "bdev_name": "Nvme0n1" 00:08:11.146 }, 00:08:11.146 { 00:08:11.146 "nbd_device": "/dev/nbd1", 00:08:11.146 "bdev_name": "Nvme1n1" 00:08:11.146 }, 00:08:11.146 { 00:08:11.146 "nbd_device": "/dev/nbd2", 00:08:11.146 "bdev_name": "Nvme2n1" 00:08:11.146 }, 00:08:11.146 { 00:08:11.146 "nbd_device": "/dev/nbd3", 00:08:11.146 "bdev_name": "Nvme2n2" 00:08:11.146 }, 00:08:11.146 { 00:08:11.146 "nbd_device": "/dev/nbd4", 00:08:11.146 "bdev_name": "Nvme2n3" 00:08:11.146 }, 00:08:11.146 { 00:08:11.146 "nbd_device": "/dev/nbd5", 00:08:11.146 "bdev_name": "Nvme3n1" 00:08:11.146 } 00:08:11.146 ]' 00:08:11.146 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:11.146 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:11.146 { 00:08:11.146 "nbd_device": "/dev/nbd0", 00:08:11.146 "bdev_name": "Nvme0n1" 00:08:11.146 }, 00:08:11.146 { 00:08:11.146 "nbd_device": "/dev/nbd1", 00:08:11.146 "bdev_name": "Nvme1n1" 00:08:11.146 }, 00:08:11.146 { 00:08:11.146 "nbd_device": "/dev/nbd2", 00:08:11.146 "bdev_name": "Nvme2n1" 00:08:11.146 }, 00:08:11.146 { 00:08:11.146 "nbd_device": "/dev/nbd3", 00:08:11.146 "bdev_name": "Nvme2n2" 00:08:11.146 }, 00:08:11.146 { 00:08:11.146 "nbd_device": "/dev/nbd4", 00:08:11.146 "bdev_name": "Nvme2n3" 00:08:11.146 }, 00:08:11.146 { 00:08:11.146 "nbd_device": "/dev/nbd5", 00:08:11.146 "bdev_name": "Nvme3n1" 00:08:11.146 } 00:08:11.146 ]' 00:08:11.146 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:11.146 10:20:10 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:11.146 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.146 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:11.146 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:11.146 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:11.146 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.146 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:11.405 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:11.405 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:11.405 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:11.405 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.405 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.405 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:11.405 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.405 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.406 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.406 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:11.665 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:11.665 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:11.665 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:11.665 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.665 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.665 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:11.665 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.665 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.665 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.665 10:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:11.923 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:11.923 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:11.923 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:11.923 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.923 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.923 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:11.923 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.923 10:20:11 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:11.923 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.923 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:12.181 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:12.181 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:12.181 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:12.181 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.181 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.181 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:12.181 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.181 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.181 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.181 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.439 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:12.697 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:12.697 10:20:11 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:12.697 10:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:12.697 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:12.697 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:12.697 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:12.697 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:12.697 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:12.697 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:12.697 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:12.697 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:12.698 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:12.956 /dev/nbd0 00:08:12.956 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:12.956 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:12.956 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:12.957 
10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:12.957 1+0 records in 00:08:12.957 1+0 records out 00:08:12.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539382 s, 7.6 MB/s 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:12.957 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:13.215 /dev/nbd1 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:13.215 1+0 records in 00:08:13.215 1+0 records out 00:08:13.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000786898 s, 5.2 MB/s 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:13.215 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:13.474 /dev/nbd10 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:13.474 1+0 records in 00:08:13.474 1+0 records out 00:08:13.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647798 s, 6.3 MB/s 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:13.474 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:13.733 /dev/nbd11 00:08:13.733 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:13.733 10:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:13.733 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:13.733 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:13.733 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:13.733 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:13.733 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:13.733 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:13.733 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:13.733 10:20:12 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:13.733 10:20:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:13.733 1+0 records in 00:08:13.733 1+0 records out 00:08:13.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903096 s, 4.5 MB/s 00:08:13.733 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.733 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:13.733 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.733 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:13.733 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:13.733 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:13.733 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:13.733 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:13.992 /dev/nbd12 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:13.992 1+0 records in 00:08:13.992 1+0 records out 00:08:13.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000836958 s, 4.9 MB/s 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:13.992 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:14.251 /dev/nbd13 
00:08:14.251 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:14.251 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:14.251 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:14.252 1+0 records in 00:08:14.252 1+0 records out 00:08:14.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000777543 s, 5.3 MB/s 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.252 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:14.511 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:14.511 { 00:08:14.511 "nbd_device": "/dev/nbd0", 00:08:14.511 "bdev_name": "Nvme0n1" 00:08:14.511 }, 00:08:14.511 { 00:08:14.511 "nbd_device": "/dev/nbd1", 00:08:14.511 "bdev_name": "Nvme1n1" 00:08:14.511 }, 00:08:14.511 { 00:08:14.511 "nbd_device": "/dev/nbd10", 00:08:14.511 "bdev_name": "Nvme2n1" 00:08:14.511 }, 00:08:14.511 { 00:08:14.511 "nbd_device": "/dev/nbd11", 00:08:14.511 "bdev_name": "Nvme2n2" 00:08:14.511 }, 00:08:14.511 { 00:08:14.511 "nbd_device": "/dev/nbd12", 00:08:14.511 "bdev_name": "Nvme2n3" 00:08:14.511 }, 00:08:14.511 { 00:08:14.511 "nbd_device": "/dev/nbd13", 00:08:14.511 "bdev_name": "Nvme3n1" 00:08:14.511 } 00:08:14.511 ]' 00:08:14.511 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:14.511 { 00:08:14.511 "nbd_device": "/dev/nbd0", 00:08:14.511 "bdev_name": "Nvme0n1" 00:08:14.511 }, 00:08:14.511 { 00:08:14.511 "nbd_device": "/dev/nbd1", 00:08:14.511 "bdev_name": "Nvme1n1" 00:08:14.511 }, 00:08:14.511 { 00:08:14.511 "nbd_device": "/dev/nbd10", 00:08:14.511 "bdev_name": "Nvme2n1" 
00:08:14.511 }, 00:08:14.511 { 00:08:14.511 "nbd_device": "/dev/nbd11", 00:08:14.511 "bdev_name": "Nvme2n2" 00:08:14.511 }, 00:08:14.511 { 00:08:14.511 "nbd_device": "/dev/nbd12", 00:08:14.511 "bdev_name": "Nvme2n3" 00:08:14.511 }, 00:08:14.511 { 00:08:14.511 "nbd_device": "/dev/nbd13", 00:08:14.511 "bdev_name": "Nvme3n1" 00:08:14.511 } 00:08:14.511 ]' 00:08:14.511 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:14.511 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:14.511 /dev/nbd1 00:08:14.511 /dev/nbd10 00:08:14.511 /dev/nbd11 00:08:14.511 /dev/nbd12 00:08:14.511 /dev/nbd13' 00:08:14.511 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:14.511 /dev/nbd1 00:08:14.511 /dev/nbd10 00:08:14.512 /dev/nbd11 00:08:14.512 /dev/nbd12 00:08:14.512 /dev/nbd13' 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:14.512 256+0 records in 00:08:14.512 256+0 records out 00:08:14.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111208 s, 94.3 MB/s 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:14.512 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:14.771 256+0 records in 00:08:14.771 256+0 records out 00:08:14.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131204 s, 8.0 MB/s 00:08:14.771 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:14.771 10:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:14.771 256+0 records in 00:08:14.771 256+0 records out 00:08:14.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134582 s, 7.8 MB/s 00:08:14.771 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:14.771 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:15.029 256+0 records in 00:08:15.029 256+0 records out 00:08:15.029 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131077 s, 8.0 MB/s 00:08:15.029 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:15.029 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:15.029 256+0 records in 00:08:15.029 256+0 records out 00:08:15.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134129 s, 7.8 MB/s 00:08:15.029 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:15.029 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:15.288 256+0 records in 00:08:15.288 256+0 records out 00:08:15.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131937 s, 7.9 MB/s 00:08:15.288 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:15.288 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:15.288 256+0 records in 00:08:15.288 256+0 records out 00:08:15.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133834 s, 7.8 MB/s 00:08:15.288 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:15.288 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:15.288 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:15.288 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:15.288 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:15.288 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:15.288 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:15.288 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.288 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:15.546 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.547 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:15.806 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:15.806 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:15.806 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:15.806 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.806 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.806 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:15.806 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.806 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.806 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.806 10:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:15.806 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:15.806 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:15.806 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:15.806 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.806 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.806 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:15.806 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.806 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.806 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.806 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:16.064 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:16.064 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:16.064 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:16.064 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.064 10:20:15 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.064 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:16.064 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.064 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.064 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.064 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:16.321 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:16.322 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:16.322 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:16.322 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.322 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.322 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:16.322 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.322 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.322 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.322 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:16.579 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:16.579 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:16.579 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:16.579 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.579 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.579 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:16.579 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.579 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.579 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.580 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:16.838 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:16.838 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:16.838 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:16.838 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.838 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.838 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:16.838 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.838 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.838 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:16.838 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.838 10:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:17.097 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:17.356 malloc_lvol_verify 00:08:17.356 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:17.356 8fd67413-82ee-46fc-99c1-f8f7ee1f4a8a 00:08:17.615 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:17.615 de60ac86-2012-4e3a-9dde-b9672cc181a7 00:08:17.615 10:20:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:17.874 /dev/nbd0 00:08:17.874 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:17.874 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:17.874 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:17.874 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:17.874 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:17.874 mke2fs 1.47.0 (5-Feb-2023) 00:08:17.874 Discarding device blocks: 0/4096 done 00:08:17.874 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:17.874 00:08:17.874 Allocating group tables: 0/1 done 00:08:17.874 Writing inode tables: 0/1 done 00:08:17.874 Creating journal (1024 blocks): done 00:08:17.874 Writing superblocks and filesystem accounting information: 0/1 done 00:08:17.874 00:08:17.874 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:17.874 10:20:17 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.874 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:17.874 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:17.874 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:17.874 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.874 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61084 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61084 ']' 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61084 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61084 00:08:18.134 killing process with pid 61084 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61084' 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61084 00:08:18.134 10:20:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61084 00:08:19.511 10:20:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:19.511 00:08:19.511 real 0m11.045s 00:08:19.511 user 0m14.147s 00:08:19.511 sys 0m4.635s 00:08:19.511 10:20:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.511 ************************************ 00:08:19.511 END TEST bdev_nbd 00:08:19.511 ************************************ 00:08:19.511 10:20:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:19.511 10:20:18 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:19.511 10:20:18 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:08:19.511 skipping fio tests on NVMe due to multi-ns failures. 00:08:19.511 10:20:18 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
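The bdev_nbd pass that finishes above exercises each NVMe bdev through the kernel NBD layer: every bdev is exported as a /dev/nbdN device over the /var/tmp/spdk-nbd.sock RPC socket, random data is pushed through it with dd and read back with cmp, and the export cycle is then repeated against a freshly created logical volume (bdev_malloc_create, bdev_lvol_create_lvstore, bdev_lvol_create, mkfs.ext4). A minimal sketch of that loop for a single device, run from the repo root and assuming an SPDK application is already serving RPCs on /var/tmp/spdk-nbd.sock with a bdev named Nvme0n1 (a scratch file under /tmp stands in for the harness's nbdrandtest file):

  $ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  $ grep -q -w nbd0 /proc/partitions          # the harness retries this until the kernel exposes the device
  $ dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  $ dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  $ cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0   # data written through NBD must read back byte-identical
  $ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0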
00:08:19.511 10:20:18 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:19.511 10:20:18 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:19.511 10:20:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:19.511 10:20:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.511 10:20:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.511 ************************************ 00:08:19.511 START TEST bdev_verify 00:08:19.511 ************************************ 00:08:19.511 10:20:18 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:19.511 [2024-12-07 10:20:18.761005] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:19.511 [2024-12-07 10:20:18.761121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61469 ] 00:08:19.770 [2024-12-07 10:20:18.940064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:19.770 [2024-12-07 10:20:19.055469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.770 [2024-12-07 10:20:19.055485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.736 Running I/O for 5 seconds... 00:08:22.610 21056.00 IOPS, 82.25 MiB/s [2024-12-07T10:20:23.337Z] 22144.00 IOPS, 86.50 MiB/s [2024-12-07T10:20:24.272Z] 23040.00 IOPS, 90.00 MiB/s [2024-12-07T10:20:25.205Z] 22480.00 IOPS, 87.81 MiB/s [2024-12-07T10:20:25.205Z] 21440.00 IOPS, 83.75 MiB/s 00:08:25.852 Latency(us) 00:08:25.852 [2024-12-07T10:20:25.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.852 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:25.852 Verification LBA range: start 0x0 length 0xbd0bd 00:08:25.852 Nvme0n1 : 5.04 1801.42 7.04 0.00 0.00 70841.93 14212.63 74116.22 00:08:25.852 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:25.852 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:25.852 Nvme0n1 : 5.06 1718.80 6.71 0.00 0.00 74283.92 14633.74 79169.59 00:08:25.852 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:25.852 Verification LBA range: start 0x0 length 0xa0000 00:08:25.852 Nvme1n1 : 5.05 1800.88 7.03 0.00 0.00 70750.79 16107.64 68220.61 00:08:25.852 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:25.852 Verification LBA range: start 0xa0000 length 0xa0000 00:08:25.852 Nvme1n1 : 5.07 1718.42 6.71 0.00 0.00 74198.65 16949.87 73695.10 00:08:25.852 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:25.852 Verification LBA range: start 0x0 length 0x80000 00:08:25.852 Nvme2n1 : 5.07 1806.22 7.06 0.00 0.00 70247.61 7580.07 59377.20 00:08:25.852 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:25.852 Verification LBA range: start 0x80000 length 0x80000 00:08:25.852 Nvme2n1 : 5.07 1718.03 6.71 0.00 0.00 73948.78 18002.66 75379.56 00:08:25.852 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:25.852 Verification LBA range: start 0x0 length 0x80000 00:08:25.852 Nvme2n2 : 5.08 1814.27 7.09 0.00 0.00 69921.29 10159.40 60640.54 00:08:25.852 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:25.852 Verification LBA range: start 0x80000 length 0x80000 00:08:25.853 Nvme2n2 : 5.07 1717.65 6.71 0.00 0.00 73867.89 18739.61 76642.90 00:08:25.853 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:25.853 Verification LBA range: start 0x0 length 0x80000 00:08:25.853 Nvme2n3 : 5.08 1813.91 7.09 0.00 0.00 69818.20 9896.20 59798.31 00:08:25.853 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:25.853 Verification LBA range: start 0x80000 length 0x80000 00:08:25.853 Nvme2n3 : 5.07 1716.87 6.71 0.00 0.00 73788.84 18002.66 76642.90 00:08:25.853 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:25.853 Verification LBA range: start 0x0 length 0x20000 00:08:25.853 Nvme3n1 : 5.08 1813.54 7.08 0.00 0.00 69735.45 9422.44 58956.08 00:08:25.853 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:25.853 Verification LBA range: start 0x20000 length 0x20000 00:08:25.853 Nvme3n1 : 5.07 1716.35 6.70 0.00 0.00 73699.30 15791.81 79590.71 00:08:25.853 [2024-12-07T10:20:25.206Z] =================================================================================================================== 00:08:25.853 [2024-12-07T10:20:25.206Z] Total : 21156.35 82.64 0.00 0.00 72042.42 7580.07 79590.71 00:08:27.226 ************************************ 00:08:27.226 END TEST bdev_verify 00:08:27.226 ************************************ 00:08:27.226 00:08:27.226 real 0m7.700s 00:08:27.226 user 0m14.198s 00:08:27.226 sys 0m0.343s 00:08:27.226 10:20:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.226 10:20:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:27.226 10:20:26 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:27.226 10:20:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:27.226 10:20:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.226 10:20:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.226 ************************************ 00:08:27.226 START TEST bdev_verify_big_io 00:08:27.226 ************************************ 00:08:27.226 10:20:26 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:27.226 [2024-12-07 10:20:26.547086] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
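The bdev_verify pass summarized above and the bdev_verify_big_io, bdev_write_zeroes and JSON passes that follow all reuse the bdevperf example application against the bdevs described in test/bdev/bdev.json; only the workload flags change between runs. The invocation, copied from the run above and executed from the repo root (-q sets the queue depth, -o the I/O size in bytes, -w the workload type, -t the run time in seconds, -m the core mask; the remaining flags are kept exactly as the harness passes them):

  $ build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3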
00:08:27.226 [2024-12-07 10:20:26.547219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61573 ] 00:08:27.486 [2024-12-07 10:20:26.728910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:27.746 [2024-12-07 10:20:26.874070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.746 [2024-12-07 10:20:26.874096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.686 Running I/O for 5 seconds... 00:08:33.130 1827.00 IOPS, 114.19 MiB/s [2024-12-07T10:20:33.420Z] 3053.00 IOPS, 190.81 MiB/s [2024-12-07T10:20:33.989Z] 3537.33 IOPS, 221.08 MiB/s 00:08:34.636 Latency(us) 00:08:34.636 [2024-12-07T10:20:33.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.636 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:34.636 Verification LBA range: start 0x0 length 0xbd0b 00:08:34.636 Nvme0n1 : 5.45 234.87 14.68 0.00 0.00 537655.11 25793.29 569347.29 00:08:34.636 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:34.636 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:34.636 Nvme0n1 : 5.59 114.53 7.16 0.00 0.00 1081884.17 11949.13 1266713.50 00:08:34.636 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:34.636 Verification LBA range: start 0x0 length 0xa000 00:08:34.636 Nvme1n1 : 5.45 231.50 14.47 0.00 0.00 533026.13 33057.52 491862.16 00:08:34.636 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:34.636 Verification LBA range: start 0xa000 length 0xa000 00:08:34.636 Nvme1n1 : 5.59 114.48 7.16 0.00 0.00 1021090.20 37268.67 956772.96 00:08:34.636 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:34.636 Verification LBA range: start 0x0 length 0x8000 00:08:34.636 Nvme2n1 : 5.49 233.45 14.59 0.00 0.00 518698.53 37479.22 528920.26 00:08:34.636 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:34.636 Verification LBA range: start 0x8000 length 0x8000 00:08:34.636 Nvme2n1 : 5.72 120.95 7.56 0.00 0.00 934501.19 30741.38 1873118.89 00:08:34.636 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:34.636 Verification LBA range: start 0x0 length 0x8000 00:08:34.636 Nvme2n2 : 5.49 233.21 14.58 0.00 0.00 510450.59 37479.22 565978.37 00:08:34.636 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:34.636 Verification LBA range: start 0x8000 length 0x8000 00:08:34.636 Nvme2n2 : 5.81 141.05 8.82 0.00 0.00 773766.78 21371.58 1927021.60 00:08:34.636 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:34.636 Verification LBA range: start 0x0 length 0x8000 00:08:34.636 Nvme2n3 : 5.49 236.78 14.80 0.00 0.00 494988.93 33057.52 569347.29 00:08:34.636 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:34.636 Verification LBA range: start 0x8000 length 0x8000 00:08:34.636 Nvme2n3 : 5.97 189.80 11.86 0.00 0.00 555382.51 8001.18 1967448.62 00:08:34.636 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:34.636 Verification LBA range: start 0x0 length 0x2000 00:08:34.636 Nvme3n1 : 5.53 254.64 15.92 0.00 0.00 454381.25 615.22 559240.53 00:08:34.636 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:08:34.636 Verification LBA range: start 0x2000 length 0x2000 00:08:34.636 Nvme3n1 : 6.15 295.81 18.49 0.00 0.00 349804.15 579.03 1435159.44 00:08:34.636 [2024-12-07T10:20:33.989Z] =================================================================================================================== 00:08:34.636 [2024-12-07T10:20:33.989Z] Total : 2401.08 150.07 0.00 0.00 579442.36 579.03 1967448.62 00:08:37.177 00:08:37.177 real 0m9.476s 00:08:37.177 user 0m17.610s 00:08:37.177 sys 0m0.435s 00:08:37.177 ************************************ 00:08:37.177 END TEST bdev_verify_big_io 00:08:37.177 ************************************ 00:08:37.177 10:20:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.177 10:20:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:37.177 10:20:35 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:37.177 10:20:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:37.177 10:20:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.177 10:20:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.177 ************************************ 00:08:37.177 START TEST bdev_write_zeroes 00:08:37.177 ************************************ 00:08:37.177 10:20:36 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:37.177 [2024-12-07 10:20:36.108876] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:37.177 [2024-12-07 10:20:36.109047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61693 ] 00:08:37.177 [2024-12-07 10:20:36.295383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.177 [2024-12-07 10:20:36.429174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.113 Running I/O for 1 seconds... 
00:08:39.049 72923.00 IOPS, 284.86 MiB/s 00:08:39.049 Latency(us) 00:08:39.049 [2024-12-07T10:20:38.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.049 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:39.049 Nvme0n1 : 1.02 12073.02 47.16 0.00 0.00 10579.72 8527.58 32215.29 00:08:39.049 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:39.049 Nvme1n1 : 1.02 12096.24 47.25 0.00 0.00 10545.86 8843.41 32425.84 00:08:39.049 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:39.049 Nvme2n1 : 1.02 12084.18 47.20 0.00 0.00 10512.57 8474.94 29688.60 00:08:39.049 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:39.049 Nvme2n2 : 1.02 12073.24 47.16 0.00 0.00 10474.77 8474.94 29267.48 00:08:39.049 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:39.049 Nvme2n3 : 1.02 12062.99 47.12 0.00 0.00 10433.15 8580.22 24740.50 00:08:39.049 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:39.049 Nvme3n1 : 1.03 12111.35 47.31 0.00 0.00 10380.93 5290.26 22108.53 00:08:39.049 [2024-12-07T10:20:38.402Z] =================================================================================================================== 00:08:39.049 [2024-12-07T10:20:38.402Z] Total : 72501.02 283.21 0.00 0.00 10487.69 5290.26 32425.84 00:08:40.425 00:08:40.425 real 0m3.432s 00:08:40.425 user 0m2.971s 00:08:40.425 sys 0m0.347s 00:08:40.425 10:20:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.425 ************************************ 00:08:40.425 END TEST bdev_write_zeroes 00:08:40.425 ************************************ 00:08:40.425 10:20:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:40.425 10:20:39 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:40.425 10:20:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:40.425 10:20:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.425 10:20:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.425 ************************************ 00:08:40.425 START TEST bdev_json_nonenclosed 00:08:40.425 ************************************ 00:08:40.425 10:20:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:40.425 [2024-12-07 10:20:39.620111] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:40.425 [2024-12-07 10:20:39.620232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61751 ] 00:08:40.684 [2024-12-07 10:20:39.804996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.684 [2024-12-07 10:20:39.942836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.684 [2024-12-07 10:20:39.942929] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:40.684 [2024-12-07 10:20:39.942953] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:40.684 [2024-12-07 10:20:39.942966] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.943 ************************************ 00:08:40.943 END TEST bdev_json_nonenclosed 00:08:40.943 ************************************ 00:08:40.943 00:08:40.943 real 0m0.694s 00:08:40.943 user 0m0.414s 00:08:40.943 sys 0m0.175s 00:08:40.943 10:20:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.943 10:20:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:40.944 10:20:40 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:40.944 10:20:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:40.944 10:20:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.944 10:20:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.944 ************************************ 00:08:40.944 START TEST bdev_json_nonarray 00:08:40.944 ************************************ 00:08:40.944 10:20:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:41.203 [2024-12-07 10:20:40.385721] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:41.203 [2024-12-07 10:20:40.385856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61777 ] 00:08:41.462 [2024-12-07 10:20:40.569364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.462 [2024-12-07 10:20:40.698718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.462 [2024-12-07 10:20:40.698828] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:41.462 [2024-12-07 10:20:40.698852] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:41.462 [2024-12-07 10:20:40.698865] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.720 00:08:41.721 real 0m0.683s 00:08:41.721 user 0m0.411s 00:08:41.721 sys 0m0.167s 00:08:41.721 10:20:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.721 ************************************ 00:08:41.721 END TEST bdev_json_nonarray 00:08:41.721 ************************************ 00:08:41.721 10:20:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:41.721 10:20:41 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:08:41.721 10:20:41 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:08:41.721 10:20:41 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:08:41.721 10:20:41 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:41.721 10:20:41 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:08:41.721 10:20:41 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:41.721 10:20:41 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:41.721 10:20:41 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:41.721 10:20:41 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:41.721 10:20:41 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:41.721 10:20:41 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:41.721 00:08:41.721 real 0m43.063s 00:08:41.721 user 1m3.137s 00:08:41.721 sys 0m8.066s 00:08:41.721 10:20:41 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.721 ************************************ 00:08:41.721 END TEST blockdev_nvme 00:08:41.721 ************************************ 00:08:41.721 10:20:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.980 10:20:41 -- spdk/autotest.sh@209 -- # uname -s 00:08:41.980 10:20:41 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:41.980 10:20:41 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:41.980 10:20:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.980 10:20:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.980 10:20:41 -- common/autotest_common.sh@10 -- # set +x 00:08:41.980 ************************************ 00:08:41.980 START TEST blockdev_nvme_gpt 00:08:41.980 ************************************ 00:08:41.980 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:41.980 * Looking for test storage... 
00:08:41.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:41.980 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:41.980 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:08:41.980 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:41.980 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.980 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:42.239 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.239 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:42.239 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:42.239 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.239 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:42.239 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.239 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.239 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.239 10:20:41 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:42.239 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.239 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:42.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.239 --rc genhtml_branch_coverage=1 00:08:42.239 --rc genhtml_function_coverage=1 00:08:42.239 --rc genhtml_legend=1 00:08:42.239 --rc geninfo_all_blocks=1 00:08:42.239 --rc geninfo_unexecuted_blocks=1 00:08:42.239 00:08:42.239 ' 00:08:42.239 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:42.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.239 --rc 
genhtml_branch_coverage=1 00:08:42.239 --rc genhtml_function_coverage=1 00:08:42.239 --rc genhtml_legend=1 00:08:42.239 --rc geninfo_all_blocks=1 00:08:42.239 --rc geninfo_unexecuted_blocks=1 00:08:42.239 00:08:42.239 ' 00:08:42.239 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:42.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.239 --rc genhtml_branch_coverage=1 00:08:42.239 --rc genhtml_function_coverage=1 00:08:42.239 --rc genhtml_legend=1 00:08:42.239 --rc geninfo_all_blocks=1 00:08:42.239 --rc geninfo_unexecuted_blocks=1 00:08:42.239 00:08:42.239 ' 00:08:42.239 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:42.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.239 --rc genhtml_branch_coverage=1 00:08:42.239 --rc genhtml_function_coverage=1 00:08:42.239 --rc genhtml_legend=1 00:08:42.239 --rc geninfo_all_blocks=1 00:08:42.239 --rc geninfo_unexecuted_blocks=1 00:08:42.239 00:08:42.239 ' 00:08:42.239 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:42.239 10:20:41 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:42.239 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:42.239 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:42.239 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:42.239 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:42.239 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:42.239 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:42.239 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:42.239 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:08:42.239 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61861 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61861 
00:08:42.240 10:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:42.240 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61861 ']' 00:08:42.240 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.240 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.240 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.240 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.240 10:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:42.240 [2024-12-07 10:20:41.479289] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:42.240 [2024-12-07 10:20:41.479446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61861 ] 00:08:42.499 [2024-12-07 10:20:41.664899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.499 [2024-12-07 10:20:41.798401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.873 10:20:42 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.873 10:20:42 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:08:43.873 10:20:42 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:08:43.873 10:20:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:08:43.873 10:20:42 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:44.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:44.390 Waiting for block devices as requested 00:08:44.390 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:44.649 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:44.649 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:44.908 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:50.188 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:50.188 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:08:50.188 10:20:49 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:50.188 10:20:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:50.188 10:20:49 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:50.188 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:50.188 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:50.188 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:50.188 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:50.188 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:50.188 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:50.188 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:50.188 BYT; 00:08:50.188 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:50.189 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:50.189 BYT; 00:08:50.189 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:50.189 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:50.189 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:50.189 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:50.189 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:50.189 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:50.189 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:50.189 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:50.189 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:50.189 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:50.189 10:20:49 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:50.189 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:50.189 10:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:51.129 The operation has completed successfully. 00:08:51.129 10:20:50 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:52.070 The operation has completed successfully. 00:08:52.070 10:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:53.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:53.576 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:53.576 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:53.576 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:53.576 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:53.835 10:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:53.835 10:20:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.835 10:20:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:53.835 [] 00:08:53.835 10:20:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.835 10:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:53.835 10:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:53.835 10:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:53.835 10:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:53.835 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:53.835 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.835 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:54.095 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.095 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:08:54.095 10:20:53 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.095 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:54.095 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.095 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:08:54.095 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:08:54.095 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.095 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:54.095 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.095 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:08:54.095 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.095 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:54.355 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.355 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:54.355 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.355 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:54.355 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.355 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:08:54.355 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:08:54.355 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:08:54.355 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.355 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:54.355 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.355 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:08:54.356 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "1d7106a2-7dec-4827-97ac-8271fda0c6ab"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1d7106a2-7dec-4827-97ac-8271fda0c6ab",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' 
' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "cbe58ee2-1e9b-4b62-b073-294614a799df"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cbe58ee2-1e9b-4b62-b073-294614a799df",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' 
"ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "96054200-69e4-4638-b03d-90a01d9ad6d6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "96054200-69e4-4638-b03d-90a01d9ad6d6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a055fda3-984b-4164-aca0-167e1493e501"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a055fda3-984b-4164-aca0-167e1493e501",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7c6bbf98-a0b7-4e92-baf6-4d57c7ad0cb0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7c6bbf98-a0b7-4e92-baf6-4d57c7ad0cb0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:54.356 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:08:54.356 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:08:54.356 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:08:54.356 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:08:54.356 10:20:53 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 61861 00:08:54.356 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61861 ']' 00:08:54.356 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61861 00:08:54.356 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:08:54.356 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.356 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61861 00:08:54.356 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.356 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.356 killing process with pid 61861 00:08:54.356 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61861' 00:08:54.356 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61861 00:08:54.356 10:20:53 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61861 00:08:56.965 10:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:56.965 10:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:56.965 10:20:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:56.965 10:20:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.965 10:20:56 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:56.965 ************************************ 00:08:56.965 START TEST bdev_hello_world 00:08:56.965 ************************************ 00:08:56.965 10:20:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:56.965 [2024-12-07 10:20:56.263714] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:56.965 [2024-12-07 10:20:56.263828] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62508 ] 00:08:57.224 [2024-12-07 10:20:56.443746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.484 [2024-12-07 10:20:56.585217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.053 [2024-12-07 10:20:57.294440] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:58.053 [2024-12-07 10:20:57.294495] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:58.053 [2024-12-07 10:20:57.294520] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:58.053 [2024-12-07 10:20:57.297704] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:58.053 [2024-12-07 10:20:57.298498] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:58.053 [2024-12-07 10:20:57.298535] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:58.053 [2024-12-07 10:20:57.298786] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:08:58.053 00:08:58.053 [2024-12-07 10:20:57.298815] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:59.442 00:08:59.442 real 0m2.298s 00:08:59.442 user 0m1.848s 00:08:59.442 sys 0m0.342s 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.442 ************************************ 00:08:59.442 END TEST bdev_hello_world 00:08:59.442 ************************************ 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:59.442 10:20:58 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:59.442 10:20:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:59.442 10:20:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.442 10:20:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.442 ************************************ 00:08:59.442 START TEST bdev_bounds 00:08:59.442 ************************************ 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62558 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:59.442 Process bdevio pid: 62558 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62558' 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62558 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62558 ']' 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.442 10:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:59.442 [2024-12-07 10:20:58.647118] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:59.442 [2024-12-07 10:20:58.647261] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62558 ] 00:08:59.701 [2024-12-07 10:20:58.831442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:59.701 [2024-12-07 10:20:58.973232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.701 [2024-12-07 10:20:58.973446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.701 [2024-12-07 10:20:58.973459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.640 10:20:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.640 10:20:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:09:00.640 10:20:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:00.640 I/O targets: 00:09:00.640 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:00.640 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:09:00.640 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:09:00.640 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:00.640 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:00.640 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:00.640 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:00.640 00:09:00.640 00:09:00.640 CUnit - A unit testing framework for C - Version 2.1-3 00:09:00.640 http://cunit.sourceforge.net/ 00:09:00.640 00:09:00.640 00:09:00.640 Suite: bdevio tests on: Nvme3n1 00:09:00.640 Test: blockdev write read block ...passed 00:09:00.640 Test: blockdev write zeroes read block ...passed 00:09:00.640 Test: blockdev write zeroes read no split ...passed 00:09:00.640 Test: blockdev write zeroes read split ...passed 00:09:00.640 Test: blockdev write zeroes read split partial ...passed 00:09:00.640 Test: blockdev reset ...[2024-12-07 10:20:59.862753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:00.640 [2024-12-07 10:20:59.866795] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:09:00.640 passed 00:09:00.640 Test: blockdev write read 8 blocks ...passed 00:09:00.640 Test: blockdev write read size > 128k ...passed 00:09:00.640 Test: blockdev write read invalid size ...passed 00:09:00.640 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.640 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.640 Test: blockdev write read max offset ...passed 00:09:00.640 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.640 Test: blockdev writev readv 8 blocks ...passed 00:09:00.640 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.640 Test: blockdev writev readv block ...passed 00:09:00.640 Test: blockdev writev readv size > 128k ...passed 00:09:00.640 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.640 Test: blockdev comparev and writev ...[2024-12-07 10:20:59.876637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bea04000 len:0x1000 00:09:00.640 [2024-12-07 10:20:59.876770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:00.640 passed 00:09:00.640 Test: blockdev nvme passthru rw ...passed 00:09:00.640 Test: blockdev nvme passthru vendor specific ...[2024-12-07 10:20:59.877774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:00.640 passed 00:09:00.640 Test: blockdev nvme admin passthru ...[2024-12-07 10:20:59.877880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:00.640 passed 00:09:00.640 Test: blockdev copy ...passed 00:09:00.640 Suite: bdevio tests on: Nvme2n3 00:09:00.640 Test: blockdev write read block ...passed 00:09:00.640 Test: blockdev write zeroes read block ...passed 00:09:00.640 Test: blockdev write zeroes read no split ...passed 00:09:00.640 Test: blockdev write zeroes read split ...passed 00:09:00.640 Test: blockdev write zeroes read split partial ...passed 00:09:00.640 Test: blockdev reset ...[2024-12-07 10:20:59.951950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:00.640 [2024-12-07 10:20:59.956375] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:00.640 passed 00:09:00.640 Test: blockdev write read 8 blocks ...passed 00:09:00.640 Test: blockdev write read size > 128k ...passed 00:09:00.640 Test: blockdev write read invalid size ...passed 00:09:00.640 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.640 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.640 Test: blockdev write read max offset ...passed 00:09:00.640 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.640 Test: blockdev writev readv 8 blocks ...passed 00:09:00.640 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.640 Test: blockdev writev readv block ...passed 00:09:00.640 Test: blockdev writev readv size > 128k ...passed 00:09:00.640 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.640 Test: blockdev comparev and writev ...[2024-12-07 10:20:59.966248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bea02000 len:0x1000 00:09:00.640 passed 00:09:00.640 Test: blockdev nvme passthru rw ...[2024-12-07 10:20:59.966365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:00.640 passed 00:09:00.640 Test: blockdev nvme passthru vendor specific ...[2024-12-07 10:20:59.967413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:00.640 [2024-12-07 10:20:59.967512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:00.640 passed 00:09:00.640 Test: blockdev nvme admin passthru ...passed 00:09:00.640 Test: blockdev copy ...passed 00:09:00.640 Suite: bdevio tests on: Nvme2n2 00:09:00.640 Test: blockdev write read block ...passed 00:09:00.640 Test: blockdev write zeroes read block ...passed 00:09:00.640 Test: blockdev write zeroes read no split ...passed 00:09:00.899 Test: blockdev write zeroes read split ...passed 00:09:00.899 Test: blockdev write zeroes read split partial ...passed 00:09:00.899 Test: blockdev reset ...[2024-12-07 10:21:00.048442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:00.899 [2024-12-07 10:21:00.052955] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:00.899 passed 00:09:00.899 Test: blockdev write read 8 blocks ...passed 00:09:00.899 Test: blockdev write read size > 128k ...passed 00:09:00.899 Test: blockdev write read invalid size ...passed 00:09:00.899 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.899 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.899 Test: blockdev write read max offset ...passed 00:09:00.899 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.899 Test: blockdev writev readv 8 blocks ...passed 00:09:00.899 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.899 Test: blockdev writev readv block ...passed 00:09:00.899 Test: blockdev writev readv size > 128k ...passed 00:09:00.899 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.899 Test: blockdev comparev and writev ...[2024-12-07 10:21:00.062791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2838000 len:0x1000 00:09:00.899 [2024-12-07 10:21:00.062941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:00.899 passed 00:09:00.899 Test: blockdev nvme passthru rw ...passed 00:09:00.899 Test: blockdev nvme passthru vendor specific ...[2024-12-07 10:21:00.063946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:00.899 [2024-12-07 10:21:00.064056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:00.899 passed 00:09:00.899 Test: blockdev nvme admin passthru ...passed 00:09:00.899 Test: blockdev copy ...passed 00:09:00.899 Suite: bdevio tests on: Nvme2n1 00:09:00.899 Test: blockdev write read block ...passed 00:09:00.899 Test: blockdev write zeroes read block ...passed 00:09:00.899 Test: blockdev write zeroes read no split ...passed 00:09:00.899 Test: blockdev write zeroes read split ...passed 00:09:00.899 Test: blockdev write zeroes read split partial ...passed 00:09:00.899 Test: blockdev reset ...[2024-12-07 10:21:00.141132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:00.899 [2024-12-07 10:21:00.145628] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:00.899 passed 00:09:00.899 Test: blockdev write read 8 blocks ...passed 00:09:00.899 Test: blockdev write read size > 128k ...passed 00:09:00.899 Test: blockdev write read invalid size ...passed 00:09:00.899 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.899 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.899 Test: blockdev write read max offset ...passed 00:09:00.899 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.899 Test: blockdev writev readv 8 blocks ...passed 00:09:00.899 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.899 Test: blockdev writev readv block ...passed 00:09:00.899 Test: blockdev writev readv size > 128k ...passed 00:09:00.899 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.899 Test: blockdev comparev and writev ...[2024-12-07 10:21:00.155258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2834000 len:0x1000 00:09:00.899 passed 00:09:00.899 Test: blockdev nvme passthru rw ...[2024-12-07 10:21:00.155383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:00.899 passed 00:09:00.899 Test: blockdev nvme passthru vendor specific ...[2024-12-07 10:21:00.156472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:00.899 passed 00:09:00.899 Test: blockdev nvme admin passthru ...[2024-12-07 10:21:00.156574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:00.899 passed 00:09:00.899 Test: blockdev copy ...passed 00:09:00.899 Suite: bdevio tests on: Nvme1n1p2 00:09:00.899 Test: blockdev write read block ...passed 00:09:00.899 Test: blockdev write zeroes read block ...passed 00:09:00.899 Test: blockdev write zeroes read no split ...passed 00:09:00.899 Test: blockdev write zeroes read split ...passed 00:09:00.899 Test: blockdev write zeroes read split partial ...passed 00:09:00.899 Test: blockdev reset ...[2024-12-07 10:21:00.237581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:00.899 [2024-12-07 10:21:00.241817] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:00.899 passed 00:09:00.899 Test: blockdev write read 8 blocks ...passed 00:09:00.899 Test: blockdev write read size > 128k ...passed 00:09:00.899 Test: blockdev write read invalid size ...passed 00:09:00.899 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.899 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.899 Test: blockdev write read max offset ...passed 00:09:00.899 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.899 Test: blockdev writev readv 8 blocks ...passed 00:09:00.899 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.899 Test: blockdev writev readv block ...passed 00:09:00.899 Test: blockdev writev readv size > 128k ...passed 00:09:00.899 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:01.159 Test: blockdev comparev and writev ...[2024-12-07 10:21:00.251413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d2830000 len:0x1000 00:09:01.159 [2024-12-07 10:21:00.251537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:01.159 passed 00:09:01.159 Test: blockdev nvme passthru rw ...passed 00:09:01.159 Test: blockdev nvme passthru vendor specific ...passed 00:09:01.159 Test: blockdev nvme admin passthru ...passed 00:09:01.159 Test: blockdev copy ...passed 00:09:01.159 Suite: bdevio tests on: Nvme1n1p1 00:09:01.159 Test: blockdev write read block ...passed 00:09:01.159 Test: blockdev write zeroes read block ...passed 00:09:01.159 Test: blockdev write zeroes read no split ...passed 00:09:01.159 Test: blockdev write zeroes read split ...passed 00:09:01.159 Test: blockdev write zeroes read split partial ...passed 00:09:01.159 Test: blockdev reset ...[2024-12-07 10:21:00.318951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:01.159 [2024-12-07 10:21:00.322963] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:01.159 passed 00:09:01.159 Test: blockdev write read 8 blocks ...passed 00:09:01.159 Test: blockdev write read size > 128k ...passed 00:09:01.159 Test: blockdev write read invalid size ...passed 00:09:01.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:01.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:01.159 Test: blockdev write read max offset ...passed 00:09:01.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:01.159 Test: blockdev writev readv 8 blocks ...passed 00:09:01.159 Test: blockdev writev readv 30 x 1block ...passed 00:09:01.159 Test: blockdev writev readv block ...passed 00:09:01.159 Test: blockdev writev readv size > 128k ...passed 00:09:01.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:01.159 Test: blockdev comparev and writev ...[2024-12-07 10:21:00.332551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bec0e000 len:0x1000 00:09:01.159 [2024-12-07 10:21:00.332671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:01.159 passed 00:09:01.159 Test: blockdev nvme passthru rw ...passed 00:09:01.159 Test: blockdev nvme passthru vendor specific ...passed 00:09:01.159 Test: blockdev nvme admin passthru ...passed 00:09:01.159 Test: blockdev copy ...passed 00:09:01.159 Suite: bdevio tests on: Nvme0n1 00:09:01.159 Test: blockdev write read block ...passed 00:09:01.159 Test: blockdev write zeroes read block ...passed 00:09:01.159 Test: blockdev write zeroes read no split ...passed 00:09:01.159 Test: blockdev write zeroes read split ...passed 00:09:01.159 Test: blockdev write zeroes read split partial ...passed 00:09:01.159 Test: blockdev reset ...[2024-12-07 10:21:00.402255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:01.159 [2024-12-07 10:21:00.406200] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:01.159 passed 00:09:01.159 Test: blockdev write read 8 blocks ...passed 00:09:01.159 Test: blockdev write read size > 128k ...passed 00:09:01.159 Test: blockdev write read invalid size ...passed 00:09:01.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:01.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:01.159 Test: blockdev write read max offset ...passed 00:09:01.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:01.159 Test: blockdev writev readv 8 blocks ...passed 00:09:01.159 Test: blockdev writev readv 30 x 1block ...passed 00:09:01.159 Test: blockdev writev readv block ...passed 00:09:01.159 Test: blockdev writev readv size > 128k ...passed 00:09:01.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:01.159 Test: blockdev comparev and writev ...passed 00:09:01.159 Test: blockdev nvme passthru rw ...[2024-12-07 10:21:00.414542] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:01.159 separate metadata which is not supported yet. 
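The COMPARE FAILURE (02/85) completions printed by the comparev/writev cases are expected: status code type 02h is "Media and Data Integrity Errors" and status code 85h is "Compare Failure", which is exactly what bdevio provokes to verify that a mismatching compare is reported correctly (dnr:1 marks the status as do-not-retry). Likewise, the INVALID OPCODE (00/01) completions come from the vendor-specific passthru cases, which deliberately submit an opcode the controller rejects. For Nvme0n1 the comparev_and_writev case is skipped rather than failed because that namespace is formatted with separate metadata, which the test does not support yet. A small, illustrative way to pull those expected completions out of a saved copy of this console log (the log file name is an assumption):

# Illustrative only: count the expected-miscompare completions in a saved console log.
log=${1:-nvme-vg-autotest.console.log}   # hypothetical file name
grep -c 'COMPARE FAILURE (02/85)' "$log"
# Confirm the Nvme0n1 comparev case was skipped (separate metadata), not failed.
grep -q 'skipping comparev_and_writev on bdev Nvme0n1' "$log" && echo 'skip notice present'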
00:09:01.159 passed 00:09:01.159 Test: blockdev nvme passthru vendor specific ...[2024-12-07 10:21:00.415326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:01.159 passed 00:09:01.159 Test: blockdev nvme admin passthru ...[2024-12-07 10:21:00.415426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:01.159 passed 00:09:01.159 Test: blockdev copy ...passed 00:09:01.159 00:09:01.159 Run Summary: Type Total Ran Passed Failed Inactive 00:09:01.159 suites 7 7 n/a 0 0 00:09:01.159 tests 161 161 161 0 0 00:09:01.159 asserts 1025 1025 1025 0 n/a 00:09:01.159 00:09:01.159 Elapsed time = 1.691 seconds 00:09:01.159 0 00:09:01.159 10:21:00 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62558 00:09:01.159 10:21:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62558 ']' 00:09:01.159 10:21:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62558 00:09:01.159 10:21:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:09:01.159 10:21:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.159 10:21:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62558 00:09:01.159 killing process with pid 62558 00:09:01.159 10:21:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.159 10:21:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.159 10:21:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62558' 00:09:01.159 10:21:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62558 00:09:01.159 10:21:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62558 00:09:02.550 ************************************ 00:09:02.550 END TEST bdev_bounds 00:09:02.550 ************************************ 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:02.550 00:09:02.550 real 0m3.062s 00:09:02.550 user 0m7.662s 00:09:02.550 sys 0m0.521s 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:02.550 10:21:01 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:02.550 10:21:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:02.550 10:21:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.550 10:21:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:02.550 ************************************ 00:09:02.550 START TEST bdev_nbd 00:09:02.550 ************************************ 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:02.550 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62623 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62623 /var/tmp/spdk-nbd.sock 00:09:02.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62623 ']' 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.551 10:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:02.551 [2024-12-07 10:21:01.806362] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
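From here on the log is the bdev_nbd test: nbd_function_test starts the bdev_svc helper app on its own RPC socket, exports each of the seven bdevs as a kernel /dev/nbdN device with the nbd_start_disk RPC, waits for the device to show up in /proc/partitions, verifies it with a single 4 KiB O_DIRECT dd read, and finally detaches everything with nbd_stop_disk. A condensed sketch of that round trip for one bdev, using the paths and RPC names visible in the trace (the bdev name and nbd index are just examples, and the readiness wait is a crude stand-in for the test's waitforlisten helper):

# Sketch of the nbd round trip this test performs for a single bdev.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-nbd.sock
[[ -e /sys/module/nbd ]] || sudo modprobe nbd                            # the test requires the nbd kernel module
"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 --json "$SPDK/test/bdev/bdev.json" &
svc_pid=$!
"$SPDK/scripts/rpc.py" -s "$SOCK" -t 120 rpc_get_methods > /dev/null     # crude stand-in for waitforlisten
"$SPDK/scripts/rpc.py" -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0       # export the bdev as /dev/nbd0
grep -q -w nbd0 /proc/partitions                                         # same readiness check as waitfornbd
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct             # one 4 KiB direct read, as in the log
"$SPDK/scripts/rpc.py" -s "$SOCK" nbd_get_disks                          # JSON map of nbd devices to bdevs
"$SPDK/scripts/rpc.py" -s "$SOCK" nbd_stop_disk /dev/nbd0                # detach
kill "$svc_pid"

The later nbd_rpc_data_verify stage visible further down repeats the same start/verify cycle, but pins each bdev to an explicit device (/dev/nbd0, /dev/nbd1, /dev/nbd10 through /dev/nbd14) instead of letting the first free nbd slot be chosen.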
00:09:02.551 [2024-12-07 10:21:01.806639] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.809 [2024-12-07 10:21:01.990366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.809 [2024-12-07 10:21:02.123383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:03.746 10:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:03.746 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:04.006 1+0 records in 00:09:04.006 1+0 records out 00:09:04.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687743 s, 6.0 MB/s 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:04.006 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:04.267 1+0 records in 00:09:04.267 1+0 records out 00:09:04.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689646 s, 5.9 MB/s 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:04.267 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:04.527 1+0 records in 00:09:04.527 1+0 records out 00:09:04.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635554 s, 6.4 MB/s 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:04.527 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:04.787 1+0 records in 00:09:04.787 1+0 records out 00:09:04.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068834 s, 6.0 MB/s 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:04.787 10:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.047 1+0 records in 00:09:05.047 1+0 records out 00:09:05.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776779 s, 5.3 MB/s 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:05.047 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.308 1+0 records in 00:09:05.308 1+0 records out 00:09:05.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770423 s, 5.3 MB/s 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:05.308 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.569 1+0 records in 00:09:05.569 1+0 records out 00:09:05.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798923 s, 5.1 MB/s 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:05.569 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.830 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd0", 00:09:05.830 "bdev_name": "Nvme0n1" 00:09:05.830 }, 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd1", 00:09:05.830 "bdev_name": "Nvme1n1p1" 00:09:05.830 }, 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd2", 00:09:05.830 "bdev_name": "Nvme1n1p2" 00:09:05.830 }, 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd3", 00:09:05.830 "bdev_name": "Nvme2n1" 00:09:05.830 }, 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd4", 00:09:05.830 "bdev_name": "Nvme2n2" 00:09:05.830 }, 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd5", 00:09:05.830 "bdev_name": "Nvme2n3" 00:09:05.830 }, 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd6", 00:09:05.830 "bdev_name": "Nvme3n1" 00:09:05.830 } 00:09:05.830 ]' 00:09:05.830 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:05.830 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd0", 00:09:05.830 "bdev_name": "Nvme0n1" 00:09:05.830 }, 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd1", 00:09:05.830 "bdev_name": "Nvme1n1p1" 00:09:05.830 }, 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd2", 00:09:05.830 "bdev_name": "Nvme1n1p2" 00:09:05.830 }, 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd3", 00:09:05.830 "bdev_name": "Nvme2n1" 00:09:05.830 }, 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd4", 00:09:05.830 "bdev_name": "Nvme2n2" 00:09:05.830 }, 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd5", 00:09:05.830 "bdev_name": "Nvme2n3" 00:09:05.830 }, 00:09:05.830 { 00:09:05.830 "nbd_device": "/dev/nbd6", 00:09:05.830 "bdev_name": "Nvme3n1" 00:09:05.830 } 00:09:05.830 ]' 00:09:05.830 10:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:05.830 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:05.830 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.830 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:05.830 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:05.830 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:05.830 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.830 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:06.090 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:06.090 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:06.090 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:06.090 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.090 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.090 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:06.090 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:06.090 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.090 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.090 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:06.090 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.348 10:21:05 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:06.607 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:06.607 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:06.607 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:06.607 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.607 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.607 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:06.607 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:06.607 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.607 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.607 10:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:06.866 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:06.866 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:06.866 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:06.866 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.866 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.866 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:06.866 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:06.866 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.866 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.866 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:07.125 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:07.125 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:07.125 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:07.126 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.126 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.126 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:07.126 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:07.126 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.126 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.126 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:07.385 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:07.646 10:21:06 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:07.646 /dev/nbd0 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:07.646 10:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:07.906 10:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:07.906 10:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:07.906 10:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:07.906 10:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:07.906 1+0 records in 00:09:07.906 1+0 records out 00:09:07.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600943 s, 6.8 MB/s 00:09:07.906 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:07.906 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:07.906 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:07.906 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:07.906 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:07.906 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:07.906 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:07.906 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:07.906 /dev/nbd1 00:09:07.906 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:08.167 10:21:07 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.167 1+0 records in 00:09:08.167 1+0 records out 00:09:08.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000969297 s, 4.2 MB/s 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:08.167 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:08.167 /dev/nbd10 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.427 1+0 records in 00:09:08.427 1+0 records out 00:09:08.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000786124 s, 5.2 MB/s 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:08.427 /dev/nbd11 00:09:08.427 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.688 1+0 records in 00:09:08.688 1+0 records out 00:09:08.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811635 s, 5.0 MB/s 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:08.688 10:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:08.688 /dev/nbd12 00:09:08.688 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:08.688 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:08.688 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:09:08.688 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:08.688 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:08.688 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:08.688 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.948 1+0 records in 00:09:08.948 1+0 records out 00:09:08.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063656 s, 6.4 MB/s 00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:08.948 /dev/nbd13 00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:08.948 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:08.949 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:09:08.949 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:08.949 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:08.949 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:08.949 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:09:08.949 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:08.949 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:08.949 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:08.949 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.949 1+0 records in 00:09:08.949 1+0 records out 00:09:08.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820906 s, 5.0 MB/s 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:09.209 /dev/nbd14 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.209 1+0 records in 00:09:09.209 1+0 records out 00:09:09.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00097784 s, 4.2 MB/s 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:09.209 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.469 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:09.469 10:21:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:09.469 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:09.469 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:09.469 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:09.469 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.469 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:09.469 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:09.469 { 00:09:09.469 "nbd_device": "/dev/nbd0", 00:09:09.469 "bdev_name": "Nvme0n1" 00:09:09.469 }, 00:09:09.469 { 00:09:09.469 "nbd_device": "/dev/nbd1", 00:09:09.469 "bdev_name": "Nvme1n1p1" 00:09:09.470 }, 00:09:09.470 { 00:09:09.470 "nbd_device": "/dev/nbd10", 00:09:09.470 "bdev_name": "Nvme1n1p2" 00:09:09.470 }, 00:09:09.470 { 00:09:09.470 "nbd_device": "/dev/nbd11", 00:09:09.470 "bdev_name": "Nvme2n1" 00:09:09.470 }, 00:09:09.470 { 00:09:09.470 "nbd_device": "/dev/nbd12", 00:09:09.470 "bdev_name": "Nvme2n2" 00:09:09.470 }, 00:09:09.470 { 00:09:09.470 "nbd_device": "/dev/nbd13", 00:09:09.470 "bdev_name": "Nvme2n3" 
00:09:09.470 }, 00:09:09.470 { 00:09:09.470 "nbd_device": "/dev/nbd14", 00:09:09.470 "bdev_name": "Nvme3n1" 00:09:09.470 } 00:09:09.470 ]' 00:09:09.470 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:09.470 { 00:09:09.470 "nbd_device": "/dev/nbd0", 00:09:09.470 "bdev_name": "Nvme0n1" 00:09:09.470 }, 00:09:09.470 { 00:09:09.470 "nbd_device": "/dev/nbd1", 00:09:09.470 "bdev_name": "Nvme1n1p1" 00:09:09.470 }, 00:09:09.470 { 00:09:09.470 "nbd_device": "/dev/nbd10", 00:09:09.470 "bdev_name": "Nvme1n1p2" 00:09:09.470 }, 00:09:09.470 { 00:09:09.470 "nbd_device": "/dev/nbd11", 00:09:09.470 "bdev_name": "Nvme2n1" 00:09:09.470 }, 00:09:09.470 { 00:09:09.470 "nbd_device": "/dev/nbd12", 00:09:09.470 "bdev_name": "Nvme2n2" 00:09:09.470 }, 00:09:09.470 { 00:09:09.470 "nbd_device": "/dev/nbd13", 00:09:09.470 "bdev_name": "Nvme2n3" 00:09:09.470 }, 00:09:09.470 { 00:09:09.470 "nbd_device": "/dev/nbd14", 00:09:09.470 "bdev_name": "Nvme3n1" 00:09:09.470 } 00:09:09.470 ]' 00:09:09.470 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:09.730 /dev/nbd1 00:09:09.730 /dev/nbd10 00:09:09.730 /dev/nbd11 00:09:09.730 /dev/nbd12 00:09:09.730 /dev/nbd13 00:09:09.730 /dev/nbd14' 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:09.730 /dev/nbd1 00:09:09.730 /dev/nbd10 00:09:09.730 /dev/nbd11 00:09:09.730 /dev/nbd12 00:09:09.730 /dev/nbd13 00:09:09.730 /dev/nbd14' 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:09.730 256+0 records in 00:09:09.730 256+0 records out 00:09:09.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133332 s, 78.6 MB/s 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.730 10:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:09.730 256+0 records in 00:09:09.730 256+0 records out 00:09:09.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.148998 s, 7.0 MB/s 00:09:09.730 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.730 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:09.990 256+0 records in 00:09:09.990 256+0 records out 00:09:09.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156708 s, 6.7 MB/s 00:09:09.990 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.990 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:09.990 256+0 records in 00:09:09.990 256+0 records out 00:09:09.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152612 s, 6.9 MB/s 00:09:09.990 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.990 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:10.250 256+0 records in 00:09:10.250 256+0 records out 00:09:10.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15515 s, 6.8 MB/s 00:09:10.250 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:10.250 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:10.510 256+0 records in 00:09:10.510 256+0 records out 00:09:10.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150051 s, 7.0 MB/s 00:09:10.510 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:10.510 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:10.510 256+0 records in 00:09:10.510 256+0 records out 00:09:10.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15103 s, 6.9 MB/s 00:09:10.510 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:10.510 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:10.771 256+0 records in 00:09:10.771 256+0 records out 00:09:10.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151732 s, 6.9 MB/s 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.771 10:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.771 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:11.031 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:11.031 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:11.031 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:11.031 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.031 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.031 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:11.031 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.031 10:21:10 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:11.031 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.031 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:11.292 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:11.292 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:11.292 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:11.292 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.292 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.292 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:11.292 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.292 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.292 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.292 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:11.550 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:11.550 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:11.550 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:11.550 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.550 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.550 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:11.550 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.550 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.550 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.550 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:11.808 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:11.808 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:11.809 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:11.809 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.809 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.809 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:11.809 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.809 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.809 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.809 10:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:11.809 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:09:11.809 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:11.809 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:11.809 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.809 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.809 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:11.809 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.809 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.809 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.809 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:12.068 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:12.068 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:12.068 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:12.068 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.068 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.068 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:12.068 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:12.068 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.068 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.068 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:12.327 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:12.327 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:12.327 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:12.327 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.327 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.327 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:12.327 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:12.327 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.327 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:12.327 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.327 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:12.586 10:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:12.845 malloc_lvol_verify 00:09:12.845 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:13.105 93f49c13-7a91-446c-b9af-d7573dff9f59 00:09:13.105 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:13.105 c6dacd07-5ff4-4423-b11d-dc72327b4c23 00:09:13.105 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:13.365 /dev/nbd0 00:09:13.365 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:13.365 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:13.365 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:13.365 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:13.365 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:13.365 mke2fs 1.47.0 (5-Feb-2023) 00:09:13.365 Discarding device blocks: 0/4096 done 00:09:13.365 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:13.365 00:09:13.365 Allocating group tables: 0/1 done 00:09:13.365 Writing inode tables: 0/1 done 00:09:13.365 Creating journal (1024 blocks): done 00:09:13.365 Writing superblocks and filesystem accounting information: 0/1 done 00:09:13.365 00:09:13.365 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:13.365 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.365 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:13.365 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:13.365 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:13.365 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:09:13.365 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:13.625 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62623 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62623 ']' 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62623 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62623 00:09:13.626 killing process with pid 62623 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62623' 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62623 00:09:13.626 10:21:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62623 00:09:15.008 10:21:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:15.008 00:09:15.008 real 0m12.515s 00:09:15.008 user 0m15.672s 00:09:15.008 sys 0m5.490s 00:09:15.008 10:21:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.008 ************************************ 00:09:15.008 END TEST bdev_nbd 00:09:15.008 ************************************ 00:09:15.008 10:21:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:15.008 10:21:14 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:09:15.008 10:21:14 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:09:15.008 10:21:14 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:09:15.008 skipping fio tests on NVMe due to multi-ns failures. 00:09:15.008 10:21:14 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:09:15.008 10:21:14 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:15.008 10:21:14 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:15.008 10:21:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:15.008 10:21:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.008 10:21:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:15.008 ************************************ 00:09:15.008 START TEST bdev_verify 00:09:15.008 ************************************ 00:09:15.008 10:21:14 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:15.269 [2024-12-07 10:21:14.392135] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:15.269 [2024-12-07 10:21:14.392249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63051 ] 00:09:15.269 [2024-12-07 10:21:14.574110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:15.530 [2024-12-07 10:21:14.718179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.530 [2024-12-07 10:21:14.718190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.468 Running I/O for 5 seconds... 
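The verify stage starting here is a plain bdevperf run against the bdevs described in bdev.json; stripped of the repo prefix (/home/vagrant/spdk_repo/spdk/), the invocation from the trace is:

    # 128 outstanding I/Os, 4 KiB transfers, built-in verify workload, 5 s run,
    # two reactor cores (-m 0x3); -C appears to let every core drive every bdev,
    # which is why each Nvme device shows up once per core mask in the table below.
    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The later bdev_verify_big_io and bdev_write_zeroes stages reuse the same pattern, changing only -o (65536-byte I/Os) and -w (write_zeroes), respectively.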
00:09:18.778 16960.00 IOPS, 66.25 MiB/s [2024-12-07T10:21:19.097Z] 16896.00 IOPS, 66.00 MiB/s [2024-12-07T10:21:19.665Z] 16874.67 IOPS, 65.92 MiB/s [2024-12-07T10:21:21.043Z] 16720.00 IOPS, 65.31 MiB/s [2024-12-07T10:21:21.043Z] 16883.20 IOPS, 65.95 MiB/s 00:09:21.690 Latency(us) 00:09:21.690 [2024-12-07T10:21:21.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.690 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0x0 length 0xbd0bd 00:09:21.690 Nvme0n1 : 5.09 1371.21 5.36 0.00 0.00 92770.70 24108.83 79590.71 00:09:21.690 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:21.690 Nvme0n1 : 5.11 1002.62 3.92 0.00 0.00 127349.25 19055.45 110332.09 00:09:21.690 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0x0 length 0x4ff80 00:09:21.690 Nvme1n1p1 : 5.10 1368.71 5.35 0.00 0.00 92744.24 25582.73 77906.25 00:09:21.690 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0x4ff80 length 0x4ff80 00:09:21.690 Nvme1n1p1 : 5.11 1002.25 3.92 0.00 0.00 127018.81 18634.33 106963.17 00:09:21.690 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0x0 length 0x4ff7f 00:09:21.690 Nvme1n1p2 : 5.12 1375.23 5.37 0.00 0.00 92716.49 15897.09 77064.02 00:09:21.690 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:09:21.690 Nvme1n1p2 : 5.11 1001.87 3.91 0.00 0.00 126670.59 18529.05 108647.63 00:09:21.690 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0x0 length 0x80000 00:09:21.690 Nvme2n1 : 5.12 1374.41 5.37 0.00 0.00 92575.31 16949.87 75379.56 00:09:21.690 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0x80000 length 0x80000 00:09:21.690 Nvme2n1 : 5.11 1001.65 3.91 0.00 0.00 126437.21 19266.00 106542.06 00:09:21.690 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0x0 length 0x80000 00:09:21.690 Nvme2n2 : 5.12 1374.09 5.37 0.00 0.00 92497.16 16107.64 75379.56 00:09:21.690 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0x80000 length 0x80000 00:09:21.690 Nvme2n2 : 5.11 1001.44 3.91 0.00 0.00 126328.60 19160.73 102752.03 00:09:21.690 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0x0 length 0x80000 00:09:21.690 Nvme2n3 : 5.12 1373.82 5.37 0.00 0.00 92398.19 15160.13 77064.02 00:09:21.690 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0x80000 length 0x80000 00:09:21.690 Nvme2n3 : 5.12 1000.93 3.91 0.00 0.00 126246.33 18950.17 106120.94 00:09:21.690 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0x0 length 0x20000 00:09:21.690 Nvme3n1 : 5.13 1373.54 5.37 0.00 0.00 92293.71 14212.63 79590.71 00:09:21.690 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:21.690 Verification LBA range: start 0x20000 length 0x20000 
00:09:21.690 Nvme3n1 : 5.12 1000.42 3.91 0.00 0.00 126154.62 19055.45 109489.86 00:09:21.690 [2024-12-07T10:21:21.043Z] =================================================================================================================== 00:09:21.690 [2024-12-07T10:21:21.043Z] Total : 16622.20 64.93 0.00 0.00 106920.52 14212.63 110332.09 00:09:23.069 00:09:23.069 real 0m7.813s 00:09:23.069 user 0m14.321s 00:09:23.069 sys 0m0.403s 00:09:23.069 ************************************ 00:09:23.069 END TEST bdev_verify 00:09:23.069 ************************************ 00:09:23.069 10:21:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.069 10:21:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:23.069 10:21:22 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:23.069 10:21:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:23.069 10:21:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.069 10:21:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:23.069 ************************************ 00:09:23.069 START TEST bdev_verify_big_io 00:09:23.069 ************************************ 00:09:23.069 10:21:22 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:23.069 [2024-12-07 10:21:22.303060] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:23.069 [2024-12-07 10:21:22.303192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63150 ] 00:09:23.329 [2024-12-07 10:21:22.490427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:23.329 [2024-12-07 10:21:22.632261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.329 [2024-12-07 10:21:22.632286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.275 Running I/O for 5 seconds... 
00:09:30.178 2077.00 IOPS, 129.81 MiB/s [2024-12-07T10:21:29.531Z] 3985.50 IOPS, 249.09 MiB/s [2024-12-07T10:21:30.098Z] 3726.00 IOPS, 232.87 MiB/s 00:09:30.745 Latency(us) 00:09:30.745 [2024-12-07T10:21:30.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.745 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.745 Verification LBA range: start 0x0 length 0xbd0b 00:09:30.745 Nvme0n1 : 5.49 183.61 11.48 0.00 0.00 678610.04 20318.79 896132.42 00:09:30.745 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.745 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:30.745 Nvme0n1 : 5.62 93.34 5.83 0.00 0.00 1309788.53 15160.13 1569916.20 00:09:30.745 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.745 Verification LBA range: start 0x0 length 0x4ff8 00:09:30.745 Nvme1n1p1 : 5.53 203.10 12.69 0.00 0.00 607448.25 39584.80 616512.15 00:09:30.745 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.745 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:30.746 Nvme1n1p1 : 5.68 101.35 6.33 0.00 0.00 1152851.64 53902.70 1212810.80 00:09:30.746 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.746 Verification LBA range: start 0x0 length 0x4ff7 00:09:30.746 Nvme1n1p2 : 5.53 203.93 12.75 0.00 0.00 596754.29 40637.58 559240.53 00:09:30.746 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.746 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:30.746 Nvme1n1p2 : 5.79 110.64 6.92 0.00 0.00 1018216.97 38532.01 1098267.55 00:09:30.746 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.746 Verification LBA range: start 0x0 length 0x8000 00:09:30.746 Nvme2n1 : 5.53 203.71 12.73 0.00 0.00 587025.05 40637.58 633356.75 00:09:30.746 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.746 Verification LBA range: start 0x8000 length 0x8000 00:09:30.746 Nvme2n1 : 5.82 117.30 7.33 0.00 0.00 931256.05 26740.79 1529489.17 00:09:30.746 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.746 Verification LBA range: start 0x0 length 0x8000 00:09:30.746 Nvme2n2 : 5.57 207.34 12.96 0.00 0.00 567677.15 33899.75 616512.15 00:09:30.746 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.746 Verification LBA range: start 0x8000 length 0x8000 00:09:30.746 Nvme2n2 : 6.01 146.00 9.13 0.00 0.00 721572.70 20318.79 2210010.78 00:09:30.746 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.746 Verification LBA range: start 0x0 length 0x8000 00:09:30.746 Nvme2n3 : 5.57 210.86 13.18 0.00 0.00 551436.55 28004.14 626618.91 00:09:30.746 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.746 Verification LBA range: start 0x8000 length 0x8000 00:09:30.746 Nvme2n3 : 6.21 197.72 12.36 0.00 0.00 517892.53 8843.41 2250437.81 00:09:30.746 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.746 Verification LBA range: start 0x0 length 0x2000 00:09:30.746 Nvme3n1 : 5.59 225.23 14.08 0.00 0.00 510120.10 5342.89 640094.59 00:09:30.746 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.746 Verification LBA range: start 0x2000 length 0x2000 00:09:30.746 Nvme3n1 : 6.30 251.39 15.71 0.00 0.00 396103.66 694.18 2061778.35 00:09:30.746 
[2024-12-07T10:21:30.099Z] =================================================================================================================== 00:09:30.746 [2024-12-07T10:21:30.099Z] Total : 2455.53 153.47 0.00 0.00 651840.43 694.18 2250437.81 00:09:32.647 00:09:32.647 real 0m9.748s 00:09:32.647 user 0m18.092s 00:09:32.647 sys 0m0.464s 00:09:32.647 ************************************ 00:09:32.647 END TEST bdev_verify_big_io 00:09:32.647 ************************************ 00:09:32.647 10:21:31 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.647 10:21:31 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:32.647 10:21:31 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:32.647 10:21:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:32.647 10:21:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.647 10:21:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:32.905 ************************************ 00:09:32.905 START TEST bdev_write_zeroes 00:09:32.905 ************************************ 00:09:32.905 10:21:32 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:32.905 [2024-12-07 10:21:32.105369] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:32.905 [2024-12-07 10:21:32.105513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63280 ] 00:09:33.164 [2024-12-07 10:21:32.288603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.164 [2024-12-07 10:21:32.415497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.099 Running I/O for 1 seconds... 
00:09:35.034 76608.00 IOPS, 299.25 MiB/s 00:09:35.034 Latency(us) 00:09:35.034 [2024-12-07T10:21:34.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.034 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:35.034 Nvme0n1 : 1.02 10903.53 42.59 0.00 0.00 11709.45 10264.67 29478.04 00:09:35.034 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:35.034 Nvme1n1p1 : 1.02 10892.76 42.55 0.00 0.00 11705.43 10475.23 29899.16 00:09:35.034 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:35.034 Nvme1n1p2 : 1.02 10882.14 42.51 0.00 0.00 11674.98 10054.12 27161.91 00:09:35.034 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:35.034 Nvme2n1 : 1.03 10908.80 42.61 0.00 0.00 11595.51 7053.67 23056.04 00:09:35.034 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:35.034 Nvme2n2 : 1.02 10869.72 42.46 0.00 0.00 11610.99 10212.04 22213.81 00:09:35.034 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:35.034 Nvme2n3 : 1.03 10860.17 42.42 0.00 0.00 11595.93 9948.84 21897.97 00:09:35.034 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:35.034 Nvme3n1 : 1.03 10898.78 42.57 0.00 0.00 11541.67 6606.24 21055.74 00:09:35.034 [2024-12-07T10:21:34.387Z] =================================================================================================================== 00:09:35.034 [2024-12-07T10:21:34.387Z] Total : 76215.91 297.72 0.00 0.00 11633.32 6606.24 29899.16 00:09:36.427 00:09:36.427 real 0m3.388s 00:09:36.427 user 0m2.907s 00:09:36.427 sys 0m0.364s 00:09:36.427 10:21:35 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.427 10:21:35 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:36.427 ************************************ 00:09:36.427 END TEST bdev_write_zeroes 00:09:36.427 ************************************ 00:09:36.427 10:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:36.427 10:21:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:36.427 10:21:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.427 10:21:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:36.427 ************************************ 00:09:36.427 START TEST bdev_json_nonenclosed 00:09:36.427 ************************************ 00:09:36.427 10:21:35 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:36.427 [2024-12-07 10:21:35.560904] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:36.427 [2024-12-07 10:21:35.561023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63334 ] 00:09:36.427 [2024-12-07 10:21:35.739695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.686 [2024-12-07 10:21:35.848119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.686 [2024-12-07 10:21:35.848208] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:36.686 [2024-12-07 10:21:35.848230] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:36.686 [2024-12-07 10:21:35.848242] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:36.945 00:09:36.945 real 0m0.618s 00:09:36.945 user 0m0.385s 00:09:36.945 sys 0m0.129s 00:09:36.945 10:21:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.945 ************************************ 00:09:36.945 END TEST bdev_json_nonenclosed 00:09:36.945 ************************************ 00:09:36.945 10:21:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:36.945 10:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:36.945 10:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:36.945 10:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.945 10:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:36.945 ************************************ 00:09:36.945 START TEST bdev_json_nonarray 00:09:36.945 ************************************ 00:09:36.945 10:21:36 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:36.945 [2024-12-07 10:21:36.266925] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:36.945 [2024-12-07 10:21:36.267130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63360 ] 00:09:37.205 [2024-12-07 10:21:36.467592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.465 [2024-12-07 10:21:36.575492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.465 [2024-12-07 10:21:36.575591] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:37.465 [2024-12-07 10:21:36.575612] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:37.465 [2024-12-07 10:21:36.575624] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.725 00:09:37.725 real 0m0.653s 00:09:37.725 user 0m0.381s 00:09:37.725 sys 0m0.167s 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.725 ************************************ 00:09:37.725 END TEST bdev_json_nonarray 00:09:37.725 ************************************ 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:37.725 10:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:09:37.725 10:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:09:37.725 10:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:37.725 10:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.725 10:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.725 10:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:37.725 ************************************ 00:09:37.725 START TEST bdev_gpt_uuid 00:09:37.725 ************************************ 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63386 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63386 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63386 ']' 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.725 10:21:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:37.725 [2024-12-07 10:21:37.021938] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:37.725 [2024-12-07 10:21:37.022118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63386 ] 00:09:37.984 [2024-12-07 10:21:37.200417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.985 [2024-12-07 10:21:37.308673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.922 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.922 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:09:38.922 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:38.922 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.922 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:39.182 Some configs were skipped because the RPC state that can call them passed over. 00:09:39.182 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.182 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:09:39.182 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.182 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:39.182 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.182 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:39.182 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.182 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:39.182 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.182 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:09:39.182 { 00:09:39.182 "name": "Nvme1n1p1", 00:09:39.182 "aliases": [ 00:09:39.182 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:39.182 ], 00:09:39.182 "product_name": "GPT Disk", 00:09:39.182 "block_size": 4096, 00:09:39.182 "num_blocks": 655104, 00:09:39.182 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:39.182 "assigned_rate_limits": { 00:09:39.182 "rw_ios_per_sec": 0, 00:09:39.182 "rw_mbytes_per_sec": 0, 00:09:39.182 "r_mbytes_per_sec": 0, 00:09:39.182 "w_mbytes_per_sec": 0 00:09:39.182 }, 00:09:39.182 "claimed": false, 00:09:39.182 "zoned": false, 00:09:39.182 "supported_io_types": { 00:09:39.182 "read": true, 00:09:39.182 "write": true, 00:09:39.182 "unmap": true, 00:09:39.182 "flush": true, 00:09:39.182 "reset": true, 00:09:39.182 "nvme_admin": false, 00:09:39.182 "nvme_io": false, 00:09:39.182 "nvme_io_md": false, 00:09:39.182 "write_zeroes": true, 00:09:39.182 "zcopy": false, 00:09:39.182 "get_zone_info": false, 00:09:39.182 "zone_management": false, 00:09:39.182 "zone_append": false, 00:09:39.182 "compare": true, 00:09:39.182 "compare_and_write": false, 00:09:39.182 "abort": true, 00:09:39.182 "seek_hole": false, 00:09:39.182 "seek_data": false, 00:09:39.182 "copy": true, 00:09:39.182 "nvme_iov_md": false 00:09:39.182 }, 00:09:39.182 "driver_specific": { 
00:09:39.182 "gpt": { 00:09:39.182 "base_bdev": "Nvme1n1", 00:09:39.182 "offset_blocks": 256, 00:09:39.182 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:39.182 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:39.182 "partition_name": "SPDK_TEST_first" 00:09:39.182 } 00:09:39.182 } 00:09:39.182 } 00:09:39.182 ]' 00:09:39.182 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:09:39.443 { 00:09:39.443 "name": "Nvme1n1p2", 00:09:39.443 "aliases": [ 00:09:39.443 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:39.443 ], 00:09:39.443 "product_name": "GPT Disk", 00:09:39.443 "block_size": 4096, 00:09:39.443 "num_blocks": 655103, 00:09:39.443 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:39.443 "assigned_rate_limits": { 00:09:39.443 "rw_ios_per_sec": 0, 00:09:39.443 "rw_mbytes_per_sec": 0, 00:09:39.443 "r_mbytes_per_sec": 0, 00:09:39.443 "w_mbytes_per_sec": 0 00:09:39.443 }, 00:09:39.443 "claimed": false, 00:09:39.443 "zoned": false, 00:09:39.443 "supported_io_types": { 00:09:39.443 "read": true, 00:09:39.443 "write": true, 00:09:39.443 "unmap": true, 00:09:39.443 "flush": true, 00:09:39.443 "reset": true, 00:09:39.443 "nvme_admin": false, 00:09:39.443 "nvme_io": false, 00:09:39.443 "nvme_io_md": false, 00:09:39.443 "write_zeroes": true, 00:09:39.443 "zcopy": false, 00:09:39.443 "get_zone_info": false, 00:09:39.443 "zone_management": false, 00:09:39.443 "zone_append": false, 00:09:39.443 "compare": true, 00:09:39.443 "compare_and_write": false, 00:09:39.443 "abort": true, 00:09:39.443 "seek_hole": false, 00:09:39.443 "seek_data": false, 00:09:39.443 "copy": true, 00:09:39.443 "nvme_iov_md": false 00:09:39.443 }, 00:09:39.443 "driver_specific": { 00:09:39.443 "gpt": { 00:09:39.443 "base_bdev": "Nvme1n1", 00:09:39.443 "offset_blocks": 655360, 00:09:39.443 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:39.443 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:39.443 "partition_name": "SPDK_TEST_second" 00:09:39.443 } 00:09:39.443 } 00:09:39.443 } 00:09:39.443 ]' 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:39.443 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:39.444 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63386 00:09:39.444 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63386 ']' 00:09:39.444 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63386 00:09:39.444 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:09:39.444 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.444 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63386 00:09:39.703 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.703 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.703 killing process with pid 63386 00:09:39.703 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63386' 00:09:39.703 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63386 00:09:39.703 10:21:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63386 00:09:42.242 00:09:42.242 real 0m4.192s 00:09:42.242 user 0m4.248s 00:09:42.242 sys 0m0.577s 00:09:42.242 10:21:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.242 10:21:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:42.242 ************************************ 00:09:42.242 END TEST bdev_gpt_uuid 00:09:42.242 ************************************ 00:09:42.242 10:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:09:42.242 10:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:09:42.242 10:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:09:42.242 10:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:42.242 10:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:42.242 10:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:42.242 10:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:42.242 10:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:42.242 10:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:42.501 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:42.760 Waiting for block devices as requested 00:09:42.760 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.019 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:43.019 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.278 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.557 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:48.557 10:21:47 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:48.557 10:21:47 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:48.557 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:48.557 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:48.557 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:48.557 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:48.557 10:21:47 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:48.557 00:09:48.557 real 1m6.695s 00:09:48.557 user 1m21.941s 00:09:48.558 sys 0m13.185s 00:09:48.558 ************************************ 00:09:48.558 END TEST blockdev_nvme_gpt 00:09:48.558 ************************************ 00:09:48.558 10:21:47 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.558 10:21:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:48.558 10:21:47 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:48.558 10:21:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.558 10:21:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.558 10:21:47 -- common/autotest_common.sh@10 -- # set +x 00:09:48.558 ************************************ 00:09:48.558 START TEST nvme 00:09:48.558 ************************************ 00:09:48.558 10:21:47 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:48.817 * Looking for test storage... 00:09:48.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:48.817 10:21:48 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:48.817 10:21:48 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:09:48.817 10:21:48 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:48.817 10:21:48 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:48.817 10:21:48 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.817 10:21:48 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.817 10:21:48 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.817 10:21:48 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.817 10:21:48 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.817 10:21:48 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.817 10:21:48 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.817 10:21:48 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.817 10:21:48 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.817 10:21:48 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.817 10:21:48 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.817 10:21:48 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:48.817 10:21:48 nvme -- scripts/common.sh@345 -- # : 1 00:09:48.817 10:21:48 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.818 10:21:48 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.818 10:21:48 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:48.818 10:21:48 nvme -- scripts/common.sh@353 -- # local d=1 00:09:48.818 10:21:48 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.818 10:21:48 nvme -- scripts/common.sh@355 -- # echo 1 00:09:48.818 10:21:48 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.818 10:21:48 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:48.818 10:21:48 nvme -- scripts/common.sh@353 -- # local d=2 00:09:48.818 10:21:48 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.818 10:21:48 nvme -- scripts/common.sh@355 -- # echo 2 00:09:48.818 10:21:48 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.818 10:21:48 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.818 10:21:48 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.818 10:21:48 nvme -- scripts/common.sh@368 -- # return 0 00:09:48.818 10:21:48 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.818 10:21:48 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:48.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.818 --rc genhtml_branch_coverage=1 00:09:48.818 --rc genhtml_function_coverage=1 00:09:48.818 --rc genhtml_legend=1 00:09:48.818 --rc geninfo_all_blocks=1 00:09:48.818 --rc geninfo_unexecuted_blocks=1 00:09:48.818 00:09:48.818 ' 00:09:48.818 10:21:48 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:48.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.818 --rc genhtml_branch_coverage=1 00:09:48.818 --rc genhtml_function_coverage=1 00:09:48.818 --rc genhtml_legend=1 00:09:48.818 --rc geninfo_all_blocks=1 00:09:48.818 --rc geninfo_unexecuted_blocks=1 00:09:48.818 00:09:48.818 ' 00:09:48.818 10:21:48 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:48.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.818 --rc genhtml_branch_coverage=1 00:09:48.818 --rc genhtml_function_coverage=1 00:09:48.818 --rc genhtml_legend=1 00:09:48.818 --rc geninfo_all_blocks=1 00:09:48.818 --rc geninfo_unexecuted_blocks=1 00:09:48.818 00:09:48.818 ' 00:09:48.818 10:21:48 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:48.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.818 --rc genhtml_branch_coverage=1 00:09:48.818 --rc genhtml_function_coverage=1 00:09:48.818 --rc genhtml_legend=1 00:09:48.818 --rc geninfo_all_blocks=1 00:09:48.818 --rc geninfo_unexecuted_blocks=1 00:09:48.818 00:09:48.818 ' 00:09:48.818 10:21:48 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:49.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.329 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:50.329 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:50.329 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:50.589 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:50.589 10:21:49 nvme -- nvme/nvme.sh@79 -- # uname 00:09:50.589 10:21:49 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:50.589 10:21:49 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:50.589 10:21:49 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:50.589 10:21:49 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:50.589 10:21:49 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:09:50.589 10:21:49 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:09:50.589 Waiting for stub to ready for secondary processes... 00:09:50.589 10:21:49 nvme -- common/autotest_common.sh@1075 -- # stubpid=64055 00:09:50.589 10:21:49 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:50.589 10:21:49 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:09:50.589 10:21:49 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:50.589 10:21:49 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64055 ]] 00:09:50.589 10:21:49 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:50.589 [2024-12-07 10:21:49.919472] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:50.589 [2024-12-07 10:21:49.919744] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:51.527 10:21:50 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:51.527 10:21:50 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64055 ]] 00:09:51.527 10:21:50 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:51.785 [2024-12-07 10:21:50.950021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:51.785 [2024-12-07 10:21:51.080500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.785 [2024-12-07 10:21:51.080644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.785 [2024-12-07 10:21:51.080677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.785 [2024-12-07 10:21:51.098792] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:51.785 [2024-12-07 10:21:51.099046] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:51.785 [2024-12-07 10:21:51.116040] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:51.785 [2024-12-07 10:21:51.116300] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:51.785 [2024-12-07 10:21:51.121845] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:51.785 [2024-12-07 10:21:51.122860] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:51.785 [2024-12-07 10:21:51.123616] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:51.785 [2024-12-07 10:21:51.131816] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:51.785 [2024-12-07 10:21:51.132317] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:51.785 [2024-12-07 10:21:51.132664] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:52.043 [2024-12-07 10:21:51.138120] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:52.043 [2024-12-07 10:21:51.138650] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:52.043 [2024-12-07 10:21:51.138951] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:52.043 [2024-12-07 10:21:51.139223] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:52.043 [2024-12-07 10:21:51.139490] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:52.610 10:21:51 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:52.610 10:21:51 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:09:52.610 done. 00:09:52.610 10:21:51 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:52.610 10:21:51 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:09:52.610 10:21:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.610 10:21:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:52.610 ************************************ 00:09:52.610 START TEST nvme_reset 00:09:52.610 ************************************ 00:09:52.610 10:21:51 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:52.868 Initializing NVMe Controllers 00:09:52.868 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:52.868 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:52.868 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:52.868 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:52.868 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:52.868 00:09:52.868 real 0m0.304s 00:09:52.868 user 0m0.095s 00:09:52.868 sys 0m0.168s 00:09:52.868 10:21:52 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.868 10:21:52 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:52.868 ************************************ 00:09:52.868 END TEST nvme_reset 00:09:52.868 ************************************ 00:09:53.126 10:21:52 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:53.126 10:21:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:53.126 10:21:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.126 10:21:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:53.126 ************************************ 00:09:53.126 START TEST nvme_identify 00:09:53.126 ************************************ 00:09:53.126 10:21:52 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:09:53.126 10:21:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:53.126 10:21:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:53.126 10:21:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:53.126 10:21:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:53.126 10:21:52 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:53.126 10:21:52 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:09:53.126 10:21:52 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:53.126 10:21:52 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:53.126 10:21:52 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:53.126 10:21:52 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:53.126 10:21:52 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:53.126 10:21:52 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:53.391 [2024-12-07 10:21:52.655810] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64089 terminated unexpected 00:09:53.391 ===================================================== 00:09:53.391 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:53.391 ===================================================== 00:09:53.391 Controller Capabilities/Features 00:09:53.391 ================================ 00:09:53.391 Vendor ID: 1b36 00:09:53.391 Subsystem Vendor ID: 1af4 00:09:53.391 Serial Number: 12340 00:09:53.391 Model Number: QEMU NVMe Ctrl 00:09:53.391 Firmware Version: 8.0.0 00:09:53.391 Recommended Arb Burst: 6 00:09:53.391 IEEE OUI Identifier: 00 54 52 00:09:53.391 Multi-path I/O 00:09:53.391 May have multiple subsystem ports: No 00:09:53.391 May have multiple controllers: No 00:09:53.391 Associated with SR-IOV VF: No 00:09:53.391 Max Data Transfer Size: 524288 00:09:53.391 Max Number of Namespaces: 256 00:09:53.391 Max Number of I/O Queues: 64 00:09:53.391 NVMe Specification Version (VS): 1.4 00:09:53.391 NVMe Specification Version (Identify): 1.4 00:09:53.391 Maximum Queue Entries: 2048 00:09:53.391 Contiguous Queues Required: Yes 00:09:53.391 Arbitration Mechanisms Supported 00:09:53.391 Weighted Round Robin: Not Supported 00:09:53.391 Vendor Specific: Not Supported 00:09:53.391 Reset Timeout: 7500 ms 00:09:53.391 Doorbell Stride: 4 bytes 00:09:53.391 NVM Subsystem Reset: Not Supported 00:09:53.391 Command Sets Supported 00:09:53.391 NVM Command Set: Supported 00:09:53.391 Boot Partition: Not Supported 00:09:53.391 Memory Page Size Minimum: 4096 bytes 00:09:53.391 Memory Page Size Maximum: 65536 bytes 00:09:53.391 Persistent Memory Region: Not Supported 00:09:53.391 Optional Asynchronous Events Supported 00:09:53.391 Namespace Attribute Notices: Supported 00:09:53.391 Firmware Activation Notices: Not Supported 00:09:53.391 ANA Change Notices: Not Supported 00:09:53.391 PLE Aggregate Log Change Notices: Not Supported 00:09:53.391 LBA Status Info Alert Notices: Not Supported 00:09:53.391 EGE Aggregate Log Change Notices: Not Supported 00:09:53.391 Normal NVM Subsystem Shutdown event: Not Supported 00:09:53.391 Zone Descriptor Change Notices: Not Supported 00:09:53.391 Discovery Log Change Notices: Not Supported 00:09:53.391 Controller Attributes 00:09:53.391 128-bit Host Identifier: Not Supported 00:09:53.391 Non-Operational Permissive Mode: Not Supported 00:09:53.391 NVM Sets: Not Supported 00:09:53.391 Read Recovery Levels: Not Supported 00:09:53.391 Endurance Groups: Not Supported 00:09:53.391 Predictable Latency Mode: Not Supported 00:09:53.391 Traffic Based Keep ALive: Not Supported 00:09:53.391 Namespace Granularity: Not Supported 00:09:53.391 SQ Associations: Not Supported 00:09:53.391 UUID List: Not Supported 00:09:53.391 Multi-Domain Subsystem: Not Supported 00:09:53.391 Fixed Capacity Management: Not Supported 00:09:53.391 Variable Capacity Management: Not Supported 00:09:53.391 Delete Endurance Group: Not Supported 00:09:53.391 Delete NVM Set: Not Supported 00:09:53.391 Extended LBA Formats Supported: Supported 00:09:53.391 Flexible Data Placement Supported: Not Supported 00:09:53.391 00:09:53.391 Controller Memory Buffer Support 00:09:53.391 ================================ 00:09:53.391 Supported: No 
00:09:53.391 00:09:53.391 Persistent Memory Region Support 00:09:53.391 ================================ 00:09:53.391 Supported: No 00:09:53.391 00:09:53.391 Admin Command Set Attributes 00:09:53.391 ============================ 00:09:53.391 Security Send/Receive: Not Supported 00:09:53.391 Format NVM: Supported 00:09:53.391 Firmware Activate/Download: Not Supported 00:09:53.391 Namespace Management: Supported 00:09:53.391 Device Self-Test: Not Supported 00:09:53.391 Directives: Supported 00:09:53.391 NVMe-MI: Not Supported 00:09:53.391 Virtualization Management: Not Supported 00:09:53.391 Doorbell Buffer Config: Supported 00:09:53.391 Get LBA Status Capability: Not Supported 00:09:53.391 Command & Feature Lockdown Capability: Not Supported 00:09:53.391 Abort Command Limit: 4 00:09:53.391 Async Event Request Limit: 4 00:09:53.391 Number of Firmware Slots: N/A 00:09:53.391 Firmware Slot 1 Read-Only: N/A 00:09:53.391 Firmware Activation Without Reset: N/A 00:09:53.391 Multiple Update Detection Support: N/A 00:09:53.391 Firmware Update Granularity: No Information Provided 00:09:53.391 Per-Namespace SMART Log: Yes 00:09:53.391 Asymmetric Namespace Access Log Page: Not Supported 00:09:53.391 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:53.391 Command Effects Log Page: Supported 00:09:53.391 Get Log Page Extended Data: Supported 00:09:53.391 Telemetry Log Pages: Not Supported 00:09:53.391 Persistent Event Log Pages: Not Supported 00:09:53.391 Supported Log Pages Log Page: May Support 00:09:53.391 Commands Supported & Effects Log Page: Not Supported 00:09:53.391 Feature Identifiers & Effects Log Page:May Support 00:09:53.391 NVMe-MI Commands & Effects Log Page: May Support 00:09:53.391 Data Area 4 for Telemetry Log: Not Supported 00:09:53.391 Error Log Page Entries Supported: 1 00:09:53.391 Keep Alive: Not Supported 00:09:53.391 00:09:53.391 NVM Command Set Attributes 00:09:53.391 ========================== 00:09:53.391 Submission Queue Entry Size 00:09:53.391 Max: 64 00:09:53.391 Min: 64 00:09:53.391 Completion Queue Entry Size 00:09:53.391 Max: 16 00:09:53.391 Min: 16 00:09:53.391 Number of Namespaces: 256 00:09:53.391 Compare Command: Supported 00:09:53.391 Write Uncorrectable Command: Not Supported 00:09:53.391 Dataset Management Command: Supported 00:09:53.391 Write Zeroes Command: Supported 00:09:53.391 Set Features Save Field: Supported 00:09:53.391 Reservations: Not Supported 00:09:53.391 Timestamp: Supported 00:09:53.391 Copy: Supported 00:09:53.391 Volatile Write Cache: Present 00:09:53.391 Atomic Write Unit (Normal): 1 00:09:53.391 Atomic Write Unit (PFail): 1 00:09:53.391 Atomic Compare & Write Unit: 1 00:09:53.391 Fused Compare & Write: Not Supported 00:09:53.391 Scatter-Gather List 00:09:53.391 SGL Command Set: Supported 00:09:53.391 SGL Keyed: Not Supported 00:09:53.391 SGL Bit Bucket Descriptor: Not Supported 00:09:53.391 SGL Metadata Pointer: Not Supported 00:09:53.391 Oversized SGL: Not Supported 00:09:53.391 SGL Metadata Address: Not Supported 00:09:53.391 SGL Offset: Not Supported 00:09:53.391 Transport SGL Data Block: Not Supported 00:09:53.391 Replay Protected Memory Block: Not Supported 00:09:53.391 00:09:53.391 Firmware Slot Information 00:09:53.391 ========================= 00:09:53.391 Active slot: 1 00:09:53.391 Slot 1 Firmware Revision: 1.0 00:09:53.391 00:09:53.391 00:09:53.391 Commands Supported and Effects 00:09:53.391 ============================== 00:09:53.391 Admin Commands 00:09:53.391 -------------- 00:09:53.391 Delete I/O Submission Queue (00h): Supported 
00:09:53.391 Create I/O Submission Queue (01h): Supported 00:09:53.391 Get Log Page (02h): Supported 00:09:53.391 Delete I/O Completion Queue (04h): Supported 00:09:53.391 Create I/O Completion Queue (05h): Supported 00:09:53.391 Identify (06h): Supported 00:09:53.391 Abort (08h): Supported 00:09:53.391 Set Features (09h): Supported 00:09:53.391 Get Features (0Ah): Supported 00:09:53.391 Asynchronous Event Request (0Ch): Supported 00:09:53.391 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:53.391 Directive Send (19h): Supported 00:09:53.391 Directive Receive (1Ah): Supported 00:09:53.391 Virtualization Management (1Ch): Supported 00:09:53.391 Doorbell Buffer Config (7Ch): Supported 00:09:53.391 Format NVM (80h): Supported LBA-Change 00:09:53.391 I/O Commands 00:09:53.391 ------------ 00:09:53.391 Flush (00h): Supported LBA-Change 00:09:53.391 Write (01h): Supported LBA-Change 00:09:53.391 Read (02h): Supported 00:09:53.391 Compare (05h): Supported 00:09:53.391 Write Zeroes (08h): Supported LBA-Change 00:09:53.391 Dataset Management (09h): Supported LBA-Change 00:09:53.392 Unknown (0Ch): Supported 00:09:53.392 Unknown (12h): Supported 00:09:53.392 Copy (19h): Supported LBA-Change 00:09:53.392 Unknown (1Dh): Supported LBA-Change 00:09:53.392 00:09:53.392 Error Log 00:09:53.392 ========= 00:09:53.392 00:09:53.392 Arbitration 00:09:53.392 =========== 00:09:53.392 Arbitration Burst: no limit 00:09:53.392 00:09:53.392 Power Management 00:09:53.392 ================ 00:09:53.392 Number of Power States: 1 00:09:53.392 Current Power State: Power State #0 00:09:53.392 Power State #0: 00:09:53.392 Max Power: 25.00 W 00:09:53.392 Non-Operational State: Operational 00:09:53.392 Entry Latency: 16 microseconds 00:09:53.392 Exit Latency: 4 microseconds 00:09:53.392 Relative Read Throughput: 0 00:09:53.392 Relative Read Latency: 0 00:09:53.392 Relative Write Throughput: 0 00:09:53.392 Relative Write Latency: 0 00:09:53.392 Idle Power[2024-12-07 10:21:52.657065] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64089 terminated unexpected 00:09:53.392 : Not Reported 00:09:53.392 Active Power: Not Reported 00:09:53.392 Non-Operational Permissive Mode: Not Supported 00:09:53.392 00:09:53.392 Health Information 00:09:53.392 ================== 00:09:53.392 Critical Warnings: 00:09:53.392 Available Spare Space: OK 00:09:53.392 Temperature: OK 00:09:53.392 Device Reliability: OK 00:09:53.392 Read Only: No 00:09:53.392 Volatile Memory Backup: OK 00:09:53.392 Current Temperature: 323 Kelvin (50 Celsius) 00:09:53.392 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:53.392 Available Spare: 0% 00:09:53.392 Available Spare Threshold: 0% 00:09:53.392 Life Percentage Used: 0% 00:09:53.392 Data Units Read: 762 00:09:53.392 Data Units Written: 690 00:09:53.392 Host Read Commands: 34353 00:09:53.392 Host Write Commands: 34139 00:09:53.392 Controller Busy Time: 0 minutes 00:09:53.392 Power Cycles: 0 00:09:53.392 Power On Hours: 0 hours 00:09:53.392 Unsafe Shutdowns: 0 00:09:53.392 Unrecoverable Media Errors: 0 00:09:53.392 Lifetime Error Log Entries: 0 00:09:53.392 Warning Temperature Time: 0 minutes 00:09:53.392 Critical Temperature Time: 0 minutes 00:09:53.392 00:09:53.392 Number of Queues 00:09:53.392 ================ 00:09:53.392 Number of I/O Submission Queues: 64 00:09:53.392 Number of I/O Completion Queues: 64 00:09:53.392 00:09:53.392 ZNS Specific Controller Data 00:09:53.392 ============================ 00:09:53.392 Zone Append Size Limit: 0 00:09:53.392 
00:09:53.392 00:09:53.392 Active Namespaces 00:09:53.392 ================= 00:09:53.392 Namespace ID:1 00:09:53.392 Error Recovery Timeout: Unlimited 00:09:53.392 Command Set Identifier: NVM (00h) 00:09:53.392 Deallocate: Supported 00:09:53.392 Deallocated/Unwritten Error: Supported 00:09:53.392 Deallocated Read Value: All 0x00 00:09:53.392 Deallocate in Write Zeroes: Not Supported 00:09:53.392 Deallocated Guard Field: 0xFFFF 00:09:53.392 Flush: Supported 00:09:53.392 Reservation: Not Supported 00:09:53.392 Metadata Transferred as: Separate Metadata Buffer 00:09:53.392 Namespace Sharing Capabilities: Private 00:09:53.392 Size (in LBAs): 1548666 (5GiB) 00:09:53.392 Capacity (in LBAs): 1548666 (5GiB) 00:09:53.392 Utilization (in LBAs): 1548666 (5GiB) 00:09:53.392 Thin Provisioning: Not Supported 00:09:53.392 Per-NS Atomic Units: No 00:09:53.392 Maximum Single Source Range Length: 128 00:09:53.392 Maximum Copy Length: 128 00:09:53.392 Maximum Source Range Count: 128 00:09:53.392 NGUID/EUI64 Never Reused: No 00:09:53.392 Namespace Write Protected: No 00:09:53.392 Number of LBA Formats: 8 00:09:53.392 Current LBA Format: LBA Format #07 00:09:53.392 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.392 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.392 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.392 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.392 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.392 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.392 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.392 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.392 00:09:53.392 NVM Specific Namespace Data 00:09:53.392 =========================== 00:09:53.392 Logical Block Storage Tag Mask: 0 00:09:53.392 Protection Information Capabilities: 00:09:53.392 16b Guard Protection Information Storage Tag Support: No 00:09:53.392 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:53.392 Storage Tag Check Read Support: No 00:09:53.392 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.392 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.392 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.392 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.392 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.392 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.392 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.392 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.392 ===================================================== 00:09:53.392 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:53.392 ===================================================== 00:09:53.392 Controller Capabilities/Features 00:09:53.392 ================================ 00:09:53.392 Vendor ID: 1b36 00:09:53.392 Subsystem Vendor ID: 1af4 00:09:53.392 Serial Number: 12341 00:09:53.392 Model Number: QEMU NVMe Ctrl 00:09:53.392 Firmware Version: 8.0.0 00:09:53.392 Recommended Arb Burst: 6 00:09:53.392 IEEE OUI Identifier: 00 54 52 00:09:53.392 Multi-path I/O 00:09:53.392 May have multiple subsystem ports: No 00:09:53.392 May have multiple controllers: No 
00:09:53.392 Associated with SR-IOV VF: No 00:09:53.392 Max Data Transfer Size: 524288 00:09:53.392 Max Number of Namespaces: 256 00:09:53.392 Max Number of I/O Queues: 64 00:09:53.392 NVMe Specification Version (VS): 1.4 00:09:53.392 NVMe Specification Version (Identify): 1.4 00:09:53.392 Maximum Queue Entries: 2048 00:09:53.392 Contiguous Queues Required: Yes 00:09:53.392 Arbitration Mechanisms Supported 00:09:53.392 Weighted Round Robin: Not Supported 00:09:53.392 Vendor Specific: Not Supported 00:09:53.392 Reset Timeout: 7500 ms 00:09:53.392 Doorbell Stride: 4 bytes 00:09:53.392 NVM Subsystem Reset: Not Supported 00:09:53.392 Command Sets Supported 00:09:53.392 NVM Command Set: Supported 00:09:53.392 Boot Partition: Not Supported 00:09:53.392 Memory Page Size Minimum: 4096 bytes 00:09:53.392 Memory Page Size Maximum: 65536 bytes 00:09:53.392 Persistent Memory Region: Not Supported 00:09:53.392 Optional Asynchronous Events Supported 00:09:53.392 Namespace Attribute Notices: Supported 00:09:53.392 Firmware Activation Notices: Not Supported 00:09:53.392 ANA Change Notices: Not Supported 00:09:53.392 PLE Aggregate Log Change Notices: Not Supported 00:09:53.392 LBA Status Info Alert Notices: Not Supported 00:09:53.392 EGE Aggregate Log Change Notices: Not Supported 00:09:53.392 Normal NVM Subsystem Shutdown event: Not Supported 00:09:53.392 Zone Descriptor Change Notices: Not Supported 00:09:53.392 Discovery Log Change Notices: Not Supported 00:09:53.392 Controller Attributes 00:09:53.392 128-bit Host Identifier: Not Supported 00:09:53.392 Non-Operational Permissive Mode: Not Supported 00:09:53.392 NVM Sets: Not Supported 00:09:53.392 Read Recovery Levels: Not Supported 00:09:53.392 Endurance Groups: Not Supported 00:09:53.392 Predictable Latency Mode: Not Supported 00:09:53.392 Traffic Based Keep ALive: Not Supported 00:09:53.392 Namespace Granularity: Not Supported 00:09:53.392 SQ Associations: Not Supported 00:09:53.392 UUID List: Not Supported 00:09:53.392 Multi-Domain Subsystem: Not Supported 00:09:53.392 Fixed Capacity Management: Not Supported 00:09:53.392 Variable Capacity Management: Not Supported 00:09:53.392 Delete Endurance Group: Not Supported 00:09:53.392 Delete NVM Set: Not Supported 00:09:53.392 Extended LBA Formats Supported: Supported 00:09:53.392 Flexible Data Placement Supported: Not Supported 00:09:53.392 00:09:53.392 Controller Memory Buffer Support 00:09:53.392 ================================ 00:09:53.392 Supported: No 00:09:53.392 00:09:53.392 Persistent Memory Region Support 00:09:53.392 ================================ 00:09:53.392 Supported: No 00:09:53.392 00:09:53.392 Admin Command Set Attributes 00:09:53.392 ============================ 00:09:53.392 Security Send/Receive: Not Supported 00:09:53.392 Format NVM: Supported 00:09:53.392 Firmware Activate/Download: Not Supported 00:09:53.393 Namespace Management: Supported 00:09:53.393 Device Self-Test: Not Supported 00:09:53.393 Directives: Supported 00:09:53.393 NVMe-MI: Not Supported 00:09:53.393 Virtualization Management: Not Supported 00:09:53.393 Doorbell Buffer Config: Supported 00:09:53.393 Get LBA Status Capability: Not Supported 00:09:53.393 Command & Feature Lockdown Capability: Not Supported 00:09:53.393 Abort Command Limit: 4 00:09:53.393 Async Event Request Limit: 4 00:09:53.393 Number of Firmware Slots: N/A 00:09:53.393 Firmware Slot 1 Read-Only: N/A 00:09:53.393 Firmware Activation Without Reset: N/A 00:09:53.393 Multiple Update Detection Support: N/A 00:09:53.393 Firmware Update Granularity: No 
Information Provided 00:09:53.393 Per-Namespace SMART Log: Yes 00:09:53.393 Asymmetric Namespace Access Log Page: Not Supported 00:09:53.393 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:53.393 Command Effects Log Page: Supported 00:09:53.393 Get Log Page Extended Data: Supported 00:09:53.393 Telemetry Log Pages: Not Supported 00:09:53.393 Persistent Event Log Pages: Not Supported 00:09:53.393 Supported Log Pages Log Page: May Support 00:09:53.393 Commands Supported & Effects Log Page: Not Supported 00:09:53.393 Feature Identifiers & Effects Log Page:May Support 00:09:53.393 NVMe-MI Commands & Effects Log Page: May Support 00:09:53.393 Data Area 4 for Telemetry Log: Not Supported 00:09:53.393 Error Log Page Entries Supported: 1 00:09:53.393 Keep Alive: Not Supported 00:09:53.393 00:09:53.393 NVM Command Set Attributes 00:09:53.393 ========================== 00:09:53.393 Submission Queue Entry Size 00:09:53.393 Max: 64 00:09:53.393 Min: 64 00:09:53.393 Completion Queue Entry Size 00:09:53.393 Max: 16 00:09:53.393 Min: 16 00:09:53.393 Number of Namespaces: 256 00:09:53.393 Compare Command: Supported 00:09:53.393 Write Uncorrectable Command: Not Supported 00:09:53.393 Dataset Management Command: Supported 00:09:53.393 Write Zeroes Command: Supported 00:09:53.393 Set Features Save Field: Supported 00:09:53.393 Reservations: Not Supported 00:09:53.393 Timestamp: Supported 00:09:53.393 Copy: Supported 00:09:53.393 Volatile Write Cache: Present 00:09:53.393 Atomic Write Unit (Normal): 1 00:09:53.393 Atomic Write Unit (PFail): 1 00:09:53.393 Atomic Compare & Write Unit: 1 00:09:53.393 Fused Compare & Write: Not Supported 00:09:53.393 Scatter-Gather List 00:09:53.393 SGL Command Set: Supported 00:09:53.393 SGL Keyed: Not Supported 00:09:53.393 SGL Bit Bucket Descriptor: Not Supported 00:09:53.393 SGL Metadata Pointer: Not Supported 00:09:53.393 Oversized SGL: Not Supported 00:09:53.393 SGL Metadata Address: Not Supported 00:09:53.393 SGL Offset: Not Supported 00:09:53.393 Transport SGL Data Block: Not Supported 00:09:53.393 Replay Protected Memory Block: Not Supported 00:09:53.393 00:09:53.393 Firmware Slot Information 00:09:53.393 ========================= 00:09:53.393 Active slot: 1 00:09:53.393 Slot 1 Firmware Revision: 1.0 00:09:53.393 00:09:53.393 00:09:53.393 Commands Supported and Effects 00:09:53.393 ============================== 00:09:53.393 Admin Commands 00:09:53.393 -------------- 00:09:53.393 Delete I/O Submission Queue (00h): Supported 00:09:53.393 Create I/O Submission Queue (01h): Supported 00:09:53.393 Get Log Page (02h): Supported 00:09:53.393 Delete I/O Completion Queue (04h): Supported 00:09:53.393 Create I/O Completion Queue (05h): Supported 00:09:53.393 Identify (06h): Supported 00:09:53.393 Abort (08h): Supported 00:09:53.393 Set Features (09h): Supported 00:09:53.393 Get Features (0Ah): Supported 00:09:53.393 Asynchronous Event Request (0Ch): Supported 00:09:53.393 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:53.393 Directive Send (19h): Supported 00:09:53.393 Directive Receive (1Ah): Supported 00:09:53.393 Virtualization Management (1Ch): Supported 00:09:53.393 Doorbell Buffer Config (7Ch): Supported 00:09:53.393 Format NVM (80h): Supported LBA-Change 00:09:53.393 I/O Commands 00:09:53.393 ------------ 00:09:53.393 Flush (00h): Supported LBA-Change 00:09:53.393 Write (01h): Supported LBA-Change 00:09:53.393 Read (02h): Supported 00:09:53.393 Compare (05h): Supported 00:09:53.393 Write Zeroes (08h): Supported LBA-Change 00:09:53.393 Dataset Management 
(09h): Supported LBA-Change 00:09:53.393 Unknown (0Ch): Supported 00:09:53.393 Unknown (12h): Supported 00:09:53.393 Copy (19h): Supported LBA-Change 00:09:53.393 Unknown (1Dh): Supported LBA-Change 00:09:53.393 00:09:53.393 Error Log 00:09:53.393 ========= 00:09:53.393 00:09:53.393 Arbitration 00:09:53.393 =========== 00:09:53.393 Arbitration Burst: no limit 00:09:53.393 00:09:53.393 Power Management 00:09:53.393 ================ 00:09:53.393 Number of Power States: 1 00:09:53.393 Current Power State: Power State #0 00:09:53.393 Power State #0: 00:09:53.393 Max Power: 25.00 W 00:09:53.393 Non-Operational State: Operational 00:09:53.393 Entry Latency: 16 microseconds 00:09:53.393 Exit Latency: 4 microseconds 00:09:53.393 Relative Read Throughput: 0 00:09:53.393 Relative Read Latency: 0 00:09:53.393 Relative Write Throughput: 0 00:09:53.393 Relative Write Latency: 0 00:09:53.393 Idle Power: Not Reported 00:09:53.393 Active Power: Not Reported 00:09:53.393 Non-Operational Permissive Mode: Not Supported 00:09:53.393 00:09:53.393 Health Information 00:09:53.393 ================== 00:09:53.393 Critical Warnings: 00:09:53.393 Available Spare Space: OK 00:09:53.393 Temperature: [2024-12-07 10:21:52.658006] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64089 terminated unexpected 00:09:53.393 OK 00:09:53.393 Device Reliability: OK 00:09:53.393 Read Only: No 00:09:53.393 Volatile Memory Backup: OK 00:09:53.393 Current Temperature: 323 Kelvin (50 Celsius) 00:09:53.393 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:53.393 Available Spare: 0% 00:09:53.393 Available Spare Threshold: 0% 00:09:53.393 Life Percentage Used: 0% 00:09:53.393 Data Units Read: 1170 00:09:53.393 Data Units Written: 1037 00:09:53.393 Host Read Commands: 49474 00:09:53.393 Host Write Commands: 48249 00:09:53.393 Controller Busy Time: 0 minutes 00:09:53.393 Power Cycles: 0 00:09:53.393 Power On Hours: 0 hours 00:09:53.393 Unsafe Shutdowns: 0 00:09:53.393 Unrecoverable Media Errors: 0 00:09:53.393 Lifetime Error Log Entries: 0 00:09:53.393 Warning Temperature Time: 0 minutes 00:09:53.393 Critical Temperature Time: 0 minutes 00:09:53.393 00:09:53.393 Number of Queues 00:09:53.393 ================ 00:09:53.393 Number of I/O Submission Queues: 64 00:09:53.393 Number of I/O Completion Queues: 64 00:09:53.393 00:09:53.393 ZNS Specific Controller Data 00:09:53.393 ============================ 00:09:53.393 Zone Append Size Limit: 0 00:09:53.393 00:09:53.393 00:09:53.393 Active Namespaces 00:09:53.393 ================= 00:09:53.393 Namespace ID:1 00:09:53.393 Error Recovery Timeout: Unlimited 00:09:53.393 Command Set Identifier: NVM (00h) 00:09:53.393 Deallocate: Supported 00:09:53.393 Deallocated/Unwritten Error: Supported 00:09:53.393 Deallocated Read Value: All 0x00 00:09:53.393 Deallocate in Write Zeroes: Not Supported 00:09:53.393 Deallocated Guard Field: 0xFFFF 00:09:53.393 Flush: Supported 00:09:53.393 Reservation: Not Supported 00:09:53.393 Namespace Sharing Capabilities: Private 00:09:53.393 Size (in LBAs): 1310720 (5GiB) 00:09:53.393 Capacity (in LBAs): 1310720 (5GiB) 00:09:53.393 Utilization (in LBAs): 1310720 (5GiB) 00:09:53.393 Thin Provisioning: Not Supported 00:09:53.393 Per-NS Atomic Units: No 00:09:53.393 Maximum Single Source Range Length: 128 00:09:53.393 Maximum Copy Length: 128 00:09:53.393 Maximum Source Range Count: 128 00:09:53.393 NGUID/EUI64 Never Reused: No 00:09:53.393 Namespace Write Protected: No 00:09:53.393 Number of LBA Formats: 8 00:09:53.393 Current LBA 
Format: LBA Format #04 00:09:53.393 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.393 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.393 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.393 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.393 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.393 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.393 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.393 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.393 00:09:53.393 NVM Specific Namespace Data 00:09:53.393 =========================== 00:09:53.393 Logical Block Storage Tag Mask: 0 00:09:53.393 Protection Information Capabilities: 00:09:53.393 16b Guard Protection Information Storage Tag Support: No 00:09:53.393 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:53.393 Storage Tag Check Read Support: No 00:09:53.393 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.394 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.394 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.394 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.394 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.394 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.394 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.394 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.394 ===================================================== 00:09:53.394 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:53.394 ===================================================== 00:09:53.394 Controller Capabilities/Features 00:09:53.394 ================================ 00:09:53.394 Vendor ID: 1b36 00:09:53.394 Subsystem Vendor ID: 1af4 00:09:53.394 Serial Number: 12343 00:09:53.394 Model Number: QEMU NVMe Ctrl 00:09:53.394 Firmware Version: 8.0.0 00:09:53.394 Recommended Arb Burst: 6 00:09:53.394 IEEE OUI Identifier: 00 54 52 00:09:53.394 Multi-path I/O 00:09:53.394 May have multiple subsystem ports: No 00:09:53.394 May have multiple controllers: Yes 00:09:53.394 Associated with SR-IOV VF: No 00:09:53.394 Max Data Transfer Size: 524288 00:09:53.394 Max Number of Namespaces: 256 00:09:53.394 Max Number of I/O Queues: 64 00:09:53.394 NVMe Specification Version (VS): 1.4 00:09:53.394 NVMe Specification Version (Identify): 1.4 00:09:53.394 Maximum Queue Entries: 2048 00:09:53.394 Contiguous Queues Required: Yes 00:09:53.394 Arbitration Mechanisms Supported 00:09:53.394 Weighted Round Robin: Not Supported 00:09:53.394 Vendor Specific: Not Supported 00:09:53.394 Reset Timeout: 7500 ms 00:09:53.394 Doorbell Stride: 4 bytes 00:09:53.394 NVM Subsystem Reset: Not Supported 00:09:53.394 Command Sets Supported 00:09:53.394 NVM Command Set: Supported 00:09:53.394 Boot Partition: Not Supported 00:09:53.394 Memory Page Size Minimum: 4096 bytes 00:09:53.394 Memory Page Size Maximum: 65536 bytes 00:09:53.394 Persistent Memory Region: Not Supported 00:09:53.394 Optional Asynchronous Events Supported 00:09:53.394 Namespace Attribute Notices: Supported 00:09:53.394 Firmware Activation Notices: Not Supported 00:09:53.394 ANA Change Notices: Not Supported 00:09:53.394 PLE Aggregate 
Log Change Notices: Not Supported 00:09:53.394 LBA Status Info Alert Notices: Not Supported 00:09:53.394 EGE Aggregate Log Change Notices: Not Supported 00:09:53.394 Normal NVM Subsystem Shutdown event: Not Supported 00:09:53.394 Zone Descriptor Change Notices: Not Supported 00:09:53.394 Discovery Log Change Notices: Not Supported 00:09:53.394 Controller Attributes 00:09:53.394 128-bit Host Identifier: Not Supported 00:09:53.394 Non-Operational Permissive Mode: Not Supported 00:09:53.394 NVM Sets: Not Supported 00:09:53.394 Read Recovery Levels: Not Supported 00:09:53.394 Endurance Groups: Supported 00:09:53.394 Predictable Latency Mode: Not Supported 00:09:53.394 Traffic Based Keep ALive: Not Supported 00:09:53.394 Namespace Granularity: Not Supported 00:09:53.394 SQ Associations: Not Supported 00:09:53.394 UUID List: Not Supported 00:09:53.394 Multi-Domain Subsystem: Not Supported 00:09:53.394 Fixed Capacity Management: Not Supported 00:09:53.394 Variable Capacity Management: Not Supported 00:09:53.394 Delete Endurance Group: Not Supported 00:09:53.394 Delete NVM Set: Not Supported 00:09:53.394 Extended LBA Formats Supported: Supported 00:09:53.394 Flexible Data Placement Supported: Supported 00:09:53.394 00:09:53.394 Controller Memory Buffer Support 00:09:53.394 ================================ 00:09:53.394 Supported: No 00:09:53.394 00:09:53.394 Persistent Memory Region Support 00:09:53.394 ================================ 00:09:53.394 Supported: No 00:09:53.394 00:09:53.394 Admin Command Set Attributes 00:09:53.394 ============================ 00:09:53.394 Security Send/Receive: Not Supported 00:09:53.394 Format NVM: Supported 00:09:53.394 Firmware Activate/Download: Not Supported 00:09:53.394 Namespace Management: Supported 00:09:53.394 Device Self-Test: Not Supported 00:09:53.394 Directives: Supported 00:09:53.394 NVMe-MI: Not Supported 00:09:53.394 Virtualization Management: Not Supported 00:09:53.394 Doorbell Buffer Config: Supported 00:09:53.394 Get LBA Status Capability: Not Supported 00:09:53.394 Command & Feature Lockdown Capability: Not Supported 00:09:53.394 Abort Command Limit: 4 00:09:53.394 Async Event Request Limit: 4 00:09:53.394 Number of Firmware Slots: N/A 00:09:53.394 Firmware Slot 1 Read-Only: N/A 00:09:53.394 Firmware Activation Without Reset: N/A 00:09:53.394 Multiple Update Detection Support: N/A 00:09:53.394 Firmware Update Granularity: No Information Provided 00:09:53.394 Per-Namespace SMART Log: Yes 00:09:53.394 Asymmetric Namespace Access Log Page: Not Supported 00:09:53.394 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:53.394 Command Effects Log Page: Supported 00:09:53.394 Get Log Page Extended Data: Supported 00:09:53.394 Telemetry Log Pages: Not Supported 00:09:53.394 Persistent Event Log Pages: Not Supported 00:09:53.394 Supported Log Pages Log Page: May Support 00:09:53.394 Commands Supported & Effects Log Page: Not Supported 00:09:53.394 Feature Identifiers & Effects Log Page:May Support 00:09:53.394 NVMe-MI Commands & Effects Log Page: May Support 00:09:53.394 Data Area 4 for Telemetry Log: Not Supported 00:09:53.394 Error Log Page Entries Supported: 1 00:09:53.394 Keep Alive: Not Supported 00:09:53.394 00:09:53.394 NVM Command Set Attributes 00:09:53.394 ========================== 00:09:53.394 Submission Queue Entry Size 00:09:53.394 Max: 64 00:09:53.394 Min: 64 00:09:53.394 Completion Queue Entry Size 00:09:53.394 Max: 16 00:09:53.394 Min: 16 00:09:53.394 Number of Namespaces: 256 00:09:53.394 Compare Command: Supported 00:09:53.394 Write 
Uncorrectable Command: Not Supported 00:09:53.394 Dataset Management Command: Supported 00:09:53.394 Write Zeroes Command: Supported 00:09:53.394 Set Features Save Field: Supported 00:09:53.394 Reservations: Not Supported 00:09:53.394 Timestamp: Supported 00:09:53.394 Copy: Supported 00:09:53.394 Volatile Write Cache: Present 00:09:53.394 Atomic Write Unit (Normal): 1 00:09:53.394 Atomic Write Unit (PFail): 1 00:09:53.394 Atomic Compare & Write Unit: 1 00:09:53.394 Fused Compare & Write: Not Supported 00:09:53.394 Scatter-Gather List 00:09:53.394 SGL Command Set: Supported 00:09:53.394 SGL Keyed: Not Supported 00:09:53.394 SGL Bit Bucket Descriptor: Not Supported 00:09:53.394 SGL Metadata Pointer: Not Supported 00:09:53.394 Oversized SGL: Not Supported 00:09:53.394 SGL Metadata Address: Not Supported 00:09:53.394 SGL Offset: Not Supported 00:09:53.394 Transport SGL Data Block: Not Supported 00:09:53.394 Replay Protected Memory Block: Not Supported 00:09:53.394 00:09:53.394 Firmware Slot Information 00:09:53.394 ========================= 00:09:53.394 Active slot: 1 00:09:53.394 Slot 1 Firmware Revision: 1.0 00:09:53.394 00:09:53.394 00:09:53.394 Commands Supported and Effects 00:09:53.394 ============================== 00:09:53.394 Admin Commands 00:09:53.394 -------------- 00:09:53.394 Delete I/O Submission Queue (00h): Supported 00:09:53.394 Create I/O Submission Queue (01h): Supported 00:09:53.394 Get Log Page (02h): Supported 00:09:53.394 Delete I/O Completion Queue (04h): Supported 00:09:53.394 Create I/O Completion Queue (05h): Supported 00:09:53.394 Identify (06h): Supported 00:09:53.394 Abort (08h): Supported 00:09:53.394 Set Features (09h): Supported 00:09:53.394 Get Features (0Ah): Supported 00:09:53.394 Asynchronous Event Request (0Ch): Supported 00:09:53.394 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:53.394 Directive Send (19h): Supported 00:09:53.394 Directive Receive (1Ah): Supported 00:09:53.394 Virtualization Management (1Ch): Supported 00:09:53.394 Doorbell Buffer Config (7Ch): Supported 00:09:53.394 Format NVM (80h): Supported LBA-Change 00:09:53.394 I/O Commands 00:09:53.394 ------------ 00:09:53.394 Flush (00h): Supported LBA-Change 00:09:53.394 Write (01h): Supported LBA-Change 00:09:53.394 Read (02h): Supported 00:09:53.394 Compare (05h): Supported 00:09:53.394 Write Zeroes (08h): Supported LBA-Change 00:09:53.394 Dataset Management (09h): Supported LBA-Change 00:09:53.394 Unknown (0Ch): Supported 00:09:53.394 Unknown (12h): Supported 00:09:53.394 Copy (19h): Supported LBA-Change 00:09:53.394 Unknown (1Dh): Supported LBA-Change 00:09:53.394 00:09:53.394 Error Log 00:09:53.394 ========= 00:09:53.394 00:09:53.394 Arbitration 00:09:53.394 =========== 00:09:53.394 Arbitration Burst: no limit 00:09:53.394 00:09:53.394 Power Management 00:09:53.395 ================ 00:09:53.395 Number of Power States: 1 00:09:53.395 Current Power State: Power State #0 00:09:53.395 Power State #0: 00:09:53.395 Max Power: 25.00 W 00:09:53.395 Non-Operational State: Operational 00:09:53.395 Entry Latency: 16 microseconds 00:09:53.395 Exit Latency: 4 microseconds 00:09:53.395 Relative Read Throughput: 0 00:09:53.395 Relative Read Latency: 0 00:09:53.395 Relative Write Throughput: 0 00:09:53.395 Relative Write Latency: 0 00:09:53.395 Idle Power: Not Reported 00:09:53.395 Active Power: Not Reported 00:09:53.395 Non-Operational Permissive Mode: Not Supported 00:09:53.395 00:09:53.395 Health Information 00:09:53.395 ================== 00:09:53.395 Critical Warnings: 00:09:53.395 
Available Spare Space: OK 00:09:53.395 Temperature: OK 00:09:53.395 Device Reliability: OK 00:09:53.395 Read Only: No 00:09:53.395 Volatile Memory Backup: OK 00:09:53.395 Current Temperature: 323 Kelvin (50 Celsius) 00:09:53.395 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:53.395 Available Spare: 0% 00:09:53.395 Available Spare Threshold: 0% 00:09:53.395 Life Percentage Used: 0% 00:09:53.395 Data Units Read: 1098 00:09:53.395 Data Units Written: 1027 00:09:53.395 Host Read Commands: 37274 00:09:53.395 Host Write Commands: 36697 00:09:53.395 Controller Busy Time: 0 minutes 00:09:53.395 Power Cycles: 0 00:09:53.395 Power On Hours: 0 hours 00:09:53.395 Unsafe Shutdowns: 0 00:09:53.395 Unrecoverable Media Errors: 0 00:09:53.395 Lifetime Error Log Entries: 0 00:09:53.395 Warning Temperature Time: 0 minutes 00:09:53.395 Critical Temperature Time: 0 minutes 00:09:53.395 00:09:53.395 Number of Queues 00:09:53.395 ================ 00:09:53.395 Number of I/O Submission Queues: 64 00:09:53.395 Number of I/O Completion Queues: 64 00:09:53.395 00:09:53.395 ZNS Specific Controller Data 00:09:53.395 ============================ 00:09:53.395 Zone Append Size Limit: 0 00:09:53.395 00:09:53.395 00:09:53.395 Active Namespaces 00:09:53.395 ================= 00:09:53.395 Namespace ID:1 00:09:53.395 Error Recovery Timeout: Unlimited 00:09:53.395 Command Set Identifier: NVM (00h) 00:09:53.395 Deallocate: Supported 00:09:53.395 Deallocated/Unwritten Error: Supported 00:09:53.395 Deallocated Read Value: All 0x00 00:09:53.395 Deallocate in Write Zeroes: Not Supported 00:09:53.395 Deallocated Guard Field: 0xFFFF 00:09:53.395 Flush: Supported 00:09:53.395 Reservation: Not Supported 00:09:53.395 Namespace Sharing Capabilities: Multiple Controllers 00:09:53.395 Size (in LBAs): 262144 (1GiB) 00:09:53.395 Capacity (in LBAs): 262144 (1GiB) 00:09:53.395 Utilization (in LBAs): 262144 (1GiB) 00:09:53.395 Thin Provisioning: Not Supported 00:09:53.395 Per-NS Atomic Units: No 00:09:53.395 Maximum Single Source Range Length: 128 00:09:53.395 Maximum Copy Length: 128 00:09:53.395 Maximum Source Range Count: 128 00:09:53.395 NGUID/EUI64 Never Reused: No 00:09:53.395 Namespace Write Protected: No 00:09:53.395 Endurance group ID: 1 00:09:53.395 Number of LBA Formats: 8 00:09:53.395 Current LBA Format: LBA Format #04 00:09:53.395 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.395 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.395 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.395 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.395 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.395 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.395 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.395 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.395 00:09:53.395 Get Feature FDP: 00:09:53.395 ================ 00:09:53.395 Enabled: Yes 00:09:53.395 FDP configuration index: 0 00:09:53.395 00:09:53.395 FDP configurations log page 00:09:53.395 =========================== 00:09:53.395 Number of FDP configurations: 1 00:09:53.395 Version: 0 00:09:53.395 Size: 112 00:09:53.395 FDP Configuration Descriptor: 0 00:09:53.395 Descriptor Size: 96 00:09:53.395 Reclaim Group Identifier format: 2 00:09:53.395 FDP Volatile Write Cache: Not Present 00:09:53.395 FDP Configuration: Valid 00:09:53.395 Vendor Specific Size: 0 00:09:53.395 Number of Reclaim Groups: 2 00:09:53.395 Number of Recalim Unit Handles: 8 00:09:53.395 Max Placement Identifiers: 128 00:09:53.395 Number of 
Namespaces Suppprted: 256 00:09:53.395 Reclaim unit Nominal Size: 6000000 bytes 00:09:53.395 Estimated Reclaim Unit Time Limit: Not Reported 00:09:53.395 RUH Desc #000: RUH Type: Initially Isolated 00:09:53.395 RUH Desc #001: RUH Type: Initially Isolated 00:09:53.395 RUH Desc #002: RUH Type: Initially Isolated 00:09:53.395 RUH Desc #003: RUH Type: Initially Isolated 00:09:53.395 RUH Desc #004: RUH Type: Initially Isolated 00:09:53.395 RUH Desc #005: RUH Type: Initially Isolated 00:09:53.395 RUH Desc #006: RUH Type: Initially Isolated 00:09:53.395 RUH Desc #007: RUH Type: Initially Isolated 00:09:53.395 00:09:53.395 FDP reclaim unit handle usage log page 00:09:53.395 ====================================== 00:09:53.395 Number of Reclaim Unit Handles: 8 00:09:53.395 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:53.395 RUH Usage Desc #001: RUH Attributes: Unused 00:09:53.395 RUH Usage Desc #002: RUH Attributes: Unused 00:09:53.395 RUH Usage Desc #003: RUH Attributes: Unused 00:09:53.395 RUH Usage Desc #004: RUH Attributes: Unused 00:09:53.395 RUH Usage Desc #005: RUH Attributes: Unused 00:09:53.395 RUH Usage Desc #006: RUH Attributes: Unused 00:09:53.395 RUH Usage Desc #007: RUH Attributes: Unused 00:09:53.395 00:09:53.395 FDP statistics log page 00:09:53.395 ======================= 00:09:53.395 Host bytes with metadata written: 637968384 00:09:53.395 M[2024-12-07 10:21:52.659726] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64089 terminated unexpected 00:09:53.395 edia bytes with metadata written: 638029824 00:09:53.395 Media bytes erased: 0 00:09:53.395 00:09:53.395 FDP events log page 00:09:53.395 =================== 00:09:53.395 Number of FDP events: 0 00:09:53.395 00:09:53.395 NVM Specific Namespace Data 00:09:53.395 =========================== 00:09:53.395 Logical Block Storage Tag Mask: 0 00:09:53.395 Protection Information Capabilities: 00:09:53.395 16b Guard Protection Information Storage Tag Support: No 00:09:53.395 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:53.395 Storage Tag Check Read Support: No 00:09:53.395 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.395 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.395 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.395 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.395 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.395 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.395 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.395 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.395 ===================================================== 00:09:53.395 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:53.395 ===================================================== 00:09:53.395 Controller Capabilities/Features 00:09:53.395 ================================ 00:09:53.395 Vendor ID: 1b36 00:09:53.395 Subsystem Vendor ID: 1af4 00:09:53.395 Serial Number: 12342 00:09:53.395 Model Number: QEMU NVMe Ctrl 00:09:53.395 Firmware Version: 8.0.0 00:09:53.395 Recommended Arb Burst: 6 00:09:53.395 IEEE OUI Identifier: 00 54 52 00:09:53.395 Multi-path I/O 
00:09:53.395 May have multiple subsystem ports: No 00:09:53.395 May have multiple controllers: No 00:09:53.395 Associated with SR-IOV VF: No 00:09:53.395 Max Data Transfer Size: 524288 00:09:53.395 Max Number of Namespaces: 256 00:09:53.395 Max Number of I/O Queues: 64 00:09:53.395 NVMe Specification Version (VS): 1.4 00:09:53.395 NVMe Specification Version (Identify): 1.4 00:09:53.395 Maximum Queue Entries: 2048 00:09:53.395 Contiguous Queues Required: Yes 00:09:53.395 Arbitration Mechanisms Supported 00:09:53.395 Weighted Round Robin: Not Supported 00:09:53.395 Vendor Specific: Not Supported 00:09:53.395 Reset Timeout: 7500 ms 00:09:53.395 Doorbell Stride: 4 bytes 00:09:53.396 NVM Subsystem Reset: Not Supported 00:09:53.396 Command Sets Supported 00:09:53.396 NVM Command Set: Supported 00:09:53.396 Boot Partition: Not Supported 00:09:53.396 Memory Page Size Minimum: 4096 bytes 00:09:53.396 Memory Page Size Maximum: 65536 bytes 00:09:53.396 Persistent Memory Region: Not Supported 00:09:53.396 Optional Asynchronous Events Supported 00:09:53.396 Namespace Attribute Notices: Supported 00:09:53.396 Firmware Activation Notices: Not Supported 00:09:53.396 ANA Change Notices: Not Supported 00:09:53.396 PLE Aggregate Log Change Notices: Not Supported 00:09:53.396 LBA Status Info Alert Notices: Not Supported 00:09:53.396 EGE Aggregate Log Change Notices: Not Supported 00:09:53.396 Normal NVM Subsystem Shutdown event: Not Supported 00:09:53.396 Zone Descriptor Change Notices: Not Supported 00:09:53.396 Discovery Log Change Notices: Not Supported 00:09:53.396 Controller Attributes 00:09:53.396 128-bit Host Identifier: Not Supported 00:09:53.396 Non-Operational Permissive Mode: Not Supported 00:09:53.396 NVM Sets: Not Supported 00:09:53.396 Read Recovery Levels: Not Supported 00:09:53.396 Endurance Groups: Not Supported 00:09:53.396 Predictable Latency Mode: Not Supported 00:09:53.396 Traffic Based Keep ALive: Not Supported 00:09:53.396 Namespace Granularity: Not Supported 00:09:53.396 SQ Associations: Not Supported 00:09:53.396 UUID List: Not Supported 00:09:53.396 Multi-Domain Subsystem: Not Supported 00:09:53.396 Fixed Capacity Management: Not Supported 00:09:53.396 Variable Capacity Management: Not Supported 00:09:53.396 Delete Endurance Group: Not Supported 00:09:53.396 Delete NVM Set: Not Supported 00:09:53.396 Extended LBA Formats Supported: Supported 00:09:53.396 Flexible Data Placement Supported: Not Supported 00:09:53.396 00:09:53.396 Controller Memory Buffer Support 00:09:53.396 ================================ 00:09:53.396 Supported: No 00:09:53.396 00:09:53.396 Persistent Memory Region Support 00:09:53.396 ================================ 00:09:53.396 Supported: No 00:09:53.396 00:09:53.396 Admin Command Set Attributes 00:09:53.396 ============================ 00:09:53.396 Security Send/Receive: Not Supported 00:09:53.396 Format NVM: Supported 00:09:53.396 Firmware Activate/Download: Not Supported 00:09:53.396 Namespace Management: Supported 00:09:53.396 Device Self-Test: Not Supported 00:09:53.396 Directives: Supported 00:09:53.396 NVMe-MI: Not Supported 00:09:53.396 Virtualization Management: Not Supported 00:09:53.396 Doorbell Buffer Config: Supported 00:09:53.396 Get LBA Status Capability: Not Supported 00:09:53.396 Command & Feature Lockdown Capability: Not Supported 00:09:53.396 Abort Command Limit: 4 00:09:53.396 Async Event Request Limit: 4 00:09:53.396 Number of Firmware Slots: N/A 00:09:53.396 Firmware Slot 1 Read-Only: N/A 00:09:53.396 Firmware Activation Without Reset: N/A 
00:09:53.396 Multiple Update Detection Support: N/A 00:09:53.396 Firmware Update Granularity: No Information Provided 00:09:53.396 Per-Namespace SMART Log: Yes 00:09:53.396 Asymmetric Namespace Access Log Page: Not Supported 00:09:53.396 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:53.396 Command Effects Log Page: Supported 00:09:53.396 Get Log Page Extended Data: Supported 00:09:53.396 Telemetry Log Pages: Not Supported 00:09:53.396 Persistent Event Log Pages: Not Supported 00:09:53.396 Supported Log Pages Log Page: May Support 00:09:53.396 Commands Supported & Effects Log Page: Not Supported 00:09:53.396 Feature Identifiers & Effects Log Page:May Support 00:09:53.396 NVMe-MI Commands & Effects Log Page: May Support 00:09:53.396 Data Area 4 for Telemetry Log: Not Supported 00:09:53.396 Error Log Page Entries Supported: 1 00:09:53.396 Keep Alive: Not Supported 00:09:53.396 00:09:53.396 NVM Command Set Attributes 00:09:53.396 ========================== 00:09:53.396 Submission Queue Entry Size 00:09:53.396 Max: 64 00:09:53.396 Min: 64 00:09:53.396 Completion Queue Entry Size 00:09:53.396 Max: 16 00:09:53.396 Min: 16 00:09:53.396 Number of Namespaces: 256 00:09:53.396 Compare Command: Supported 00:09:53.396 Write Uncorrectable Command: Not Supported 00:09:53.396 Dataset Management Command: Supported 00:09:53.396 Write Zeroes Command: Supported 00:09:53.396 Set Features Save Field: Supported 00:09:53.396 Reservations: Not Supported 00:09:53.396 Timestamp: Supported 00:09:53.396 Copy: Supported 00:09:53.396 Volatile Write Cache: Present 00:09:53.396 Atomic Write Unit (Normal): 1 00:09:53.396 Atomic Write Unit (PFail): 1 00:09:53.396 Atomic Compare & Write Unit: 1 00:09:53.396 Fused Compare & Write: Not Supported 00:09:53.396 Scatter-Gather List 00:09:53.396 SGL Command Set: Supported 00:09:53.396 SGL Keyed: Not Supported 00:09:53.396 SGL Bit Bucket Descriptor: Not Supported 00:09:53.396 SGL Metadata Pointer: Not Supported 00:09:53.396 Oversized SGL: Not Supported 00:09:53.396 SGL Metadata Address: Not Supported 00:09:53.396 SGL Offset: Not Supported 00:09:53.396 Transport SGL Data Block: Not Supported 00:09:53.396 Replay Protected Memory Block: Not Supported 00:09:53.396 00:09:53.396 Firmware Slot Information 00:09:53.396 ========================= 00:09:53.396 Active slot: 1 00:09:53.396 Slot 1 Firmware Revision: 1.0 00:09:53.396 00:09:53.396 00:09:53.396 Commands Supported and Effects 00:09:53.396 ============================== 00:09:53.396 Admin Commands 00:09:53.396 -------------- 00:09:53.396 Delete I/O Submission Queue (00h): Supported 00:09:53.396 Create I/O Submission Queue (01h): Supported 00:09:53.396 Get Log Page (02h): Supported 00:09:53.396 Delete I/O Completion Queue (04h): Supported 00:09:53.396 Create I/O Completion Queue (05h): Supported 00:09:53.396 Identify (06h): Supported 00:09:53.396 Abort (08h): Supported 00:09:53.396 Set Features (09h): Supported 00:09:53.396 Get Features (0Ah): Supported 00:09:53.396 Asynchronous Event Request (0Ch): Supported 00:09:53.396 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:53.396 Directive Send (19h): Supported 00:09:53.396 Directive Receive (1Ah): Supported 00:09:53.396 Virtualization Management (1Ch): Supported 00:09:53.396 Doorbell Buffer Config (7Ch): Supported 00:09:53.396 Format NVM (80h): Supported LBA-Change 00:09:53.396 I/O Commands 00:09:53.396 ------------ 00:09:53.396 Flush (00h): Supported LBA-Change 00:09:53.396 Write (01h): Supported LBA-Change 00:09:53.396 Read (02h): Supported 00:09:53.396 Compare (05h): 
Supported 00:09:53.396 Write Zeroes (08h): Supported LBA-Change 00:09:53.396 Dataset Management (09h): Supported LBA-Change 00:09:53.396 Unknown (0Ch): Supported 00:09:53.396 Unknown (12h): Supported 00:09:53.396 Copy (19h): Supported LBA-Change 00:09:53.396 Unknown (1Dh): Supported LBA-Change 00:09:53.396 00:09:53.396 Error Log 00:09:53.396 ========= 00:09:53.396 00:09:53.396 Arbitration 00:09:53.396 =========== 00:09:53.396 Arbitration Burst: no limit 00:09:53.396 00:09:53.396 Power Management 00:09:53.396 ================ 00:09:53.396 Number of Power States: 1 00:09:53.396 Current Power State: Power State #0 00:09:53.396 Power State #0: 00:09:53.396 Max Power: 25.00 W 00:09:53.396 Non-Operational State: Operational 00:09:53.396 Entry Latency: 16 microseconds 00:09:53.396 Exit Latency: 4 microseconds 00:09:53.396 Relative Read Throughput: 0 00:09:53.396 Relative Read Latency: 0 00:09:53.396 Relative Write Throughput: 0 00:09:53.396 Relative Write Latency: 0 00:09:53.396 Idle Power: Not Reported 00:09:53.396 Active Power: Not Reported 00:09:53.396 Non-Operational Permissive Mode: Not Supported 00:09:53.396 00:09:53.396 Health Information 00:09:53.396 ================== 00:09:53.396 Critical Warnings: 00:09:53.396 Available Spare Space: OK 00:09:53.396 Temperature: OK 00:09:53.396 Device Reliability: OK 00:09:53.396 Read Only: No 00:09:53.396 Volatile Memory Backup: OK 00:09:53.396 Current Temperature: 323 Kelvin (50 Celsius) 00:09:53.396 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:53.396 Available Spare: 0% 00:09:53.397 Available Spare Threshold: 0% 00:09:53.397 Life Percentage Used: 0% 00:09:53.397 Data Units Read: 2593 00:09:53.397 Data Units Written: 2380 00:09:53.397 Host Read Commands: 106294 00:09:53.397 Host Write Commands: 104563 00:09:53.397 Controller Busy Time: 0 minutes 00:09:53.397 Power Cycles: 0 00:09:53.397 Power On Hours: 0 hours 00:09:53.397 Unsafe Shutdowns: 0 00:09:53.397 Unrecoverable Media Errors: 0 00:09:53.397 Lifetime Error Log Entries: 0 00:09:53.397 Warning Temperature Time: 0 minutes 00:09:53.397 Critical Temperature Time: 0 minutes 00:09:53.397 00:09:53.397 Number of Queues 00:09:53.397 ================ 00:09:53.397 Number of I/O Submission Queues: 64 00:09:53.397 Number of I/O Completion Queues: 64 00:09:53.397 00:09:53.397 ZNS Specific Controller Data 00:09:53.397 ============================ 00:09:53.397 Zone Append Size Limit: 0 00:09:53.397 00:09:53.397 00:09:53.397 Active Namespaces 00:09:53.397 ================= 00:09:53.397 Namespace ID:1 00:09:53.397 Error Recovery Timeout: Unlimited 00:09:53.397 Command Set Identifier: NVM (00h) 00:09:53.397 Deallocate: Supported 00:09:53.397 Deallocated/Unwritten Error: Supported 00:09:53.397 Deallocated Read Value: All 0x00 00:09:53.397 Deallocate in Write Zeroes: Not Supported 00:09:53.397 Deallocated Guard Field: 0xFFFF 00:09:53.397 Flush: Supported 00:09:53.397 Reservation: Not Supported 00:09:53.397 Namespace Sharing Capabilities: Private 00:09:53.397 Size (in LBAs): 1048576 (4GiB) 00:09:53.397 Capacity (in LBAs): 1048576 (4GiB) 00:09:53.397 Utilization (in LBAs): 1048576 (4GiB) 00:09:53.397 Thin Provisioning: Not Supported 00:09:53.397 Per-NS Atomic Units: No 00:09:53.397 Maximum Single Source Range Length: 128 00:09:53.397 Maximum Copy Length: 128 00:09:53.397 Maximum Source Range Count: 128 00:09:53.397 NGUID/EUI64 Never Reused: No 00:09:53.397 Namespace Write Protected: No 00:09:53.397 Number of LBA Formats: 8 00:09:53.397 Current LBA Format: LBA Format #04 00:09:53.397 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:09:53.397 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.397 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.397 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.397 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.397 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.397 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.397 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.397 00:09:53.397 NVM Specific Namespace Data 00:09:53.397 =========================== 00:09:53.397 Logical Block Storage Tag Mask: 0 00:09:53.397 Protection Information Capabilities: 00:09:53.397 16b Guard Protection Information Storage Tag Support: No 00:09:53.397 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:53.397 Storage Tag Check Read Support: No 00:09:53.397 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Namespace ID:2 00:09:53.397 Error Recovery Timeout: Unlimited 00:09:53.397 Command Set Identifier: NVM (00h) 00:09:53.397 Deallocate: Supported 00:09:53.397 Deallocated/Unwritten Error: Supported 00:09:53.397 Deallocated Read Value: All 0x00 00:09:53.397 Deallocate in Write Zeroes: Not Supported 00:09:53.397 Deallocated Guard Field: 0xFFFF 00:09:53.397 Flush: Supported 00:09:53.397 Reservation: Not Supported 00:09:53.397 Namespace Sharing Capabilities: Private 00:09:53.397 Size (in LBAs): 1048576 (4GiB) 00:09:53.397 Capacity (in LBAs): 1048576 (4GiB) 00:09:53.397 Utilization (in LBAs): 1048576 (4GiB) 00:09:53.397 Thin Provisioning: Not Supported 00:09:53.397 Per-NS Atomic Units: No 00:09:53.397 Maximum Single Source Range Length: 128 00:09:53.397 Maximum Copy Length: 128 00:09:53.397 Maximum Source Range Count: 128 00:09:53.397 NGUID/EUI64 Never Reused: No 00:09:53.397 Namespace Write Protected: No 00:09:53.397 Number of LBA Formats: 8 00:09:53.397 Current LBA Format: LBA Format #04 00:09:53.397 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.397 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.397 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.397 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.397 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.397 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.397 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.397 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.397 00:09:53.397 NVM Specific Namespace Data 00:09:53.397 =========================== 00:09:53.397 Logical Block Storage Tag Mask: 0 00:09:53.397 Protection Information Capabilities: 00:09:53.397 16b Guard Protection Information Storage Tag Support: No 00:09:53.397 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:09:53.397 Storage Tag Check Read Support: No 00:09:53.397 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Namespace ID:3 00:09:53.397 Error Recovery Timeout: Unlimited 00:09:53.397 Command Set Identifier: NVM (00h) 00:09:53.397 Deallocate: Supported 00:09:53.397 Deallocated/Unwritten Error: Supported 00:09:53.397 Deallocated Read Value: All 0x00 00:09:53.397 Deallocate in Write Zeroes: Not Supported 00:09:53.397 Deallocated Guard Field: 0xFFFF 00:09:53.397 Flush: Supported 00:09:53.397 Reservation: Not Supported 00:09:53.397 Namespace Sharing Capabilities: Private 00:09:53.397 Size (in LBAs): 1048576 (4GiB) 00:09:53.397 Capacity (in LBAs): 1048576 (4GiB) 00:09:53.397 Utilization (in LBAs): 1048576 (4GiB) 00:09:53.397 Thin Provisioning: Not Supported 00:09:53.397 Per-NS Atomic Units: No 00:09:53.397 Maximum Single Source Range Length: 128 00:09:53.397 Maximum Copy Length: 128 00:09:53.397 Maximum Source Range Count: 128 00:09:53.397 NGUID/EUI64 Never Reused: No 00:09:53.397 Namespace Write Protected: No 00:09:53.397 Number of LBA Formats: 8 00:09:53.397 Current LBA Format: LBA Format #04 00:09:53.397 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.397 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.397 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.397 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.397 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.397 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.397 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.397 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.397 00:09:53.397 NVM Specific Namespace Data 00:09:53.397 =========================== 00:09:53.397 Logical Block Storage Tag Mask: 0 00:09:53.397 Protection Information Capabilities: 00:09:53.397 16b Guard Protection Information Storage Tag Support: No 00:09:53.397 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:53.397 Storage Tag Check Read Support: No 00:09:53.397 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.397 10:21:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:53.397 10:21:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:53.659 ===================================================== 00:09:53.659 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:53.659 ===================================================== 00:09:53.659 Controller Capabilities/Features 00:09:53.659 ================================ 00:09:53.659 Vendor ID: 1b36 00:09:53.659 Subsystem Vendor ID: 1af4 00:09:53.659 Serial Number: 12340 00:09:53.659 Model Number: QEMU NVMe Ctrl 00:09:53.659 Firmware Version: 8.0.0 00:09:53.659 Recommended Arb Burst: 6 00:09:53.659 IEEE OUI Identifier: 00 54 52 00:09:53.659 Multi-path I/O 00:09:53.659 May have multiple subsystem ports: No 00:09:53.659 May have multiple controllers: No 00:09:53.659 Associated with SR-IOV VF: No 00:09:53.659 Max Data Transfer Size: 524288 00:09:53.659 Max Number of Namespaces: 256 00:09:53.659 Max Number of I/O Queues: 64 00:09:53.659 NVMe Specification Version (VS): 1.4 00:09:53.659 NVMe Specification Version (Identify): 1.4 00:09:53.659 Maximum Queue Entries: 2048 00:09:53.659 Contiguous Queues Required: Yes 00:09:53.659 Arbitration Mechanisms Supported 00:09:53.659 Weighted Round Robin: Not Supported 00:09:53.659 Vendor Specific: Not Supported 00:09:53.659 Reset Timeout: 7500 ms 00:09:53.659 Doorbell Stride: 4 bytes 00:09:53.659 NVM Subsystem Reset: Not Supported 00:09:53.659 Command Sets Supported 00:09:53.659 NVM Command Set: Supported 00:09:53.659 Boot Partition: Not Supported 00:09:53.659 Memory Page Size Minimum: 4096 bytes 00:09:53.659 Memory Page Size Maximum: 65536 bytes 00:09:53.659 Persistent Memory Region: Not Supported 00:09:53.659 Optional Asynchronous Events Supported 00:09:53.659 Namespace Attribute Notices: Supported 00:09:53.659 Firmware Activation Notices: Not Supported 00:09:53.659 ANA Change Notices: Not Supported 00:09:53.659 PLE Aggregate Log Change Notices: Not Supported 00:09:53.659 LBA Status Info Alert Notices: Not Supported 00:09:53.659 EGE Aggregate Log Change Notices: Not Supported 00:09:53.659 Normal NVM Subsystem Shutdown event: Not Supported 00:09:53.659 Zone Descriptor Change Notices: Not Supported 00:09:53.659 Discovery Log Change Notices: Not Supported 00:09:53.659 Controller Attributes 00:09:53.659 128-bit Host Identifier: Not Supported 00:09:53.659 Non-Operational Permissive Mode: Not Supported 00:09:53.659 NVM Sets: Not Supported 00:09:53.659 Read Recovery Levels: Not Supported 00:09:53.659 Endurance Groups: Not Supported 00:09:53.659 Predictable Latency Mode: Not Supported 00:09:53.659 Traffic Based Keep ALive: Not Supported 00:09:53.659 Namespace Granularity: Not Supported 00:09:53.659 SQ Associations: Not Supported 00:09:53.659 UUID List: Not Supported 00:09:53.659 Multi-Domain Subsystem: Not Supported 00:09:53.659 Fixed Capacity Management: Not Supported 00:09:53.659 Variable Capacity Management: Not Supported 00:09:53.659 Delete Endurance Group: Not Supported 00:09:53.659 Delete NVM Set: Not Supported 00:09:53.659 Extended LBA Formats Supported: Supported 00:09:53.659 Flexible Data Placement Supported: Not Supported 00:09:53.659 00:09:53.659 Controller Memory Buffer Support 00:09:53.659 ================================ 00:09:53.659 Supported: No 00:09:53.659 00:09:53.659 Persistent Memory Region Support 00:09:53.659 
================================ 00:09:53.659 Supported: No 00:09:53.659 00:09:53.659 Admin Command Set Attributes 00:09:53.659 ============================ 00:09:53.659 Security Send/Receive: Not Supported 00:09:53.659 Format NVM: Supported 00:09:53.659 Firmware Activate/Download: Not Supported 00:09:53.659 Namespace Management: Supported 00:09:53.659 Device Self-Test: Not Supported 00:09:53.659 Directives: Supported 00:09:53.659 NVMe-MI: Not Supported 00:09:53.659 Virtualization Management: Not Supported 00:09:53.659 Doorbell Buffer Config: Supported 00:09:53.659 Get LBA Status Capability: Not Supported 00:09:53.659 Command & Feature Lockdown Capability: Not Supported 00:09:53.659 Abort Command Limit: 4 00:09:53.659 Async Event Request Limit: 4 00:09:53.659 Number of Firmware Slots: N/A 00:09:53.659 Firmware Slot 1 Read-Only: N/A 00:09:53.659 Firmware Activation Without Reset: N/A 00:09:53.659 Multiple Update Detection Support: N/A 00:09:53.659 Firmware Update Granularity: No Information Provided 00:09:53.659 Per-Namespace SMART Log: Yes 00:09:53.659 Asymmetric Namespace Access Log Page: Not Supported 00:09:53.659 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:53.659 Command Effects Log Page: Supported 00:09:53.659 Get Log Page Extended Data: Supported 00:09:53.659 Telemetry Log Pages: Not Supported 00:09:53.659 Persistent Event Log Pages: Not Supported 00:09:53.659 Supported Log Pages Log Page: May Support 00:09:53.659 Commands Supported & Effects Log Page: Not Supported 00:09:53.659 Feature Identifiers & Effects Log Page:May Support 00:09:53.659 NVMe-MI Commands & Effects Log Page: May Support 00:09:53.659 Data Area 4 for Telemetry Log: Not Supported 00:09:53.659 Error Log Page Entries Supported: 1 00:09:53.659 Keep Alive: Not Supported 00:09:53.659 00:09:53.659 NVM Command Set Attributes 00:09:53.659 ========================== 00:09:53.660 Submission Queue Entry Size 00:09:53.660 Max: 64 00:09:53.660 Min: 64 00:09:53.660 Completion Queue Entry Size 00:09:53.660 Max: 16 00:09:53.660 Min: 16 00:09:53.660 Number of Namespaces: 256 00:09:53.660 Compare Command: Supported 00:09:53.660 Write Uncorrectable Command: Not Supported 00:09:53.660 Dataset Management Command: Supported 00:09:53.660 Write Zeroes Command: Supported 00:09:53.660 Set Features Save Field: Supported 00:09:53.660 Reservations: Not Supported 00:09:53.660 Timestamp: Supported 00:09:53.660 Copy: Supported 00:09:53.660 Volatile Write Cache: Present 00:09:53.660 Atomic Write Unit (Normal): 1 00:09:53.660 Atomic Write Unit (PFail): 1 00:09:53.660 Atomic Compare & Write Unit: 1 00:09:53.660 Fused Compare & Write: Not Supported 00:09:53.660 Scatter-Gather List 00:09:53.660 SGL Command Set: Supported 00:09:53.660 SGL Keyed: Not Supported 00:09:53.660 SGL Bit Bucket Descriptor: Not Supported 00:09:53.660 SGL Metadata Pointer: Not Supported 00:09:53.660 Oversized SGL: Not Supported 00:09:53.660 SGL Metadata Address: Not Supported 00:09:53.660 SGL Offset: Not Supported 00:09:53.660 Transport SGL Data Block: Not Supported 00:09:53.660 Replay Protected Memory Block: Not Supported 00:09:53.660 00:09:53.660 Firmware Slot Information 00:09:53.660 ========================= 00:09:53.660 Active slot: 1 00:09:53.660 Slot 1 Firmware Revision: 1.0 00:09:53.660 00:09:53.660 00:09:53.660 Commands Supported and Effects 00:09:53.660 ============================== 00:09:53.660 Admin Commands 00:09:53.660 -------------- 00:09:53.660 Delete I/O Submission Queue (00h): Supported 00:09:53.660 Create I/O Submission Queue (01h): Supported 00:09:53.660 
Get Log Page (02h): Supported 00:09:53.660 Delete I/O Completion Queue (04h): Supported 00:09:53.660 Create I/O Completion Queue (05h): Supported 00:09:53.660 Identify (06h): Supported 00:09:53.660 Abort (08h): Supported 00:09:53.660 Set Features (09h): Supported 00:09:53.660 Get Features (0Ah): Supported 00:09:53.660 Asynchronous Event Request (0Ch): Supported 00:09:53.660 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:53.660 Directive Send (19h): Supported 00:09:53.660 Directive Receive (1Ah): Supported 00:09:53.660 Virtualization Management (1Ch): Supported 00:09:53.660 Doorbell Buffer Config (7Ch): Supported 00:09:53.660 Format NVM (80h): Supported LBA-Change 00:09:53.660 I/O Commands 00:09:53.660 ------------ 00:09:53.660 Flush (00h): Supported LBA-Change 00:09:53.660 Write (01h): Supported LBA-Change 00:09:53.660 Read (02h): Supported 00:09:53.660 Compare (05h): Supported 00:09:53.660 Write Zeroes (08h): Supported LBA-Change 00:09:53.660 Dataset Management (09h): Supported LBA-Change 00:09:53.660 Unknown (0Ch): Supported 00:09:53.660 Unknown (12h): Supported 00:09:53.660 Copy (19h): Supported LBA-Change 00:09:53.660 Unknown (1Dh): Supported LBA-Change 00:09:53.660 00:09:53.660 Error Log 00:09:53.660 ========= 00:09:53.660 00:09:53.660 Arbitration 00:09:53.660 =========== 00:09:53.660 Arbitration Burst: no limit 00:09:53.660 00:09:53.660 Power Management 00:09:53.660 ================ 00:09:53.660 Number of Power States: 1 00:09:53.660 Current Power State: Power State #0 00:09:53.660 Power State #0: 00:09:53.660 Max Power: 25.00 W 00:09:53.660 Non-Operational State: Operational 00:09:53.660 Entry Latency: 16 microseconds 00:09:53.660 Exit Latency: 4 microseconds 00:09:53.660 Relative Read Throughput: 0 00:09:53.660 Relative Read Latency: 0 00:09:53.660 Relative Write Throughput: 0 00:09:53.660 Relative Write Latency: 0 00:09:53.660 Idle Power: Not Reported 00:09:53.660 Active Power: Not Reported 00:09:53.660 Non-Operational Permissive Mode: Not Supported 00:09:53.660 00:09:53.660 Health Information 00:09:53.660 ================== 00:09:53.660 Critical Warnings: 00:09:53.660 Available Spare Space: OK 00:09:53.660 Temperature: OK 00:09:53.660 Device Reliability: OK 00:09:53.660 Read Only: No 00:09:53.660 Volatile Memory Backup: OK 00:09:53.660 Current Temperature: 323 Kelvin (50 Celsius) 00:09:53.660 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:53.660 Available Spare: 0% 00:09:53.660 Available Spare Threshold: 0% 00:09:53.660 Life Percentage Used: 0% 00:09:53.660 Data Units Read: 762 00:09:53.660 Data Units Written: 690 00:09:53.660 Host Read Commands: 34353 00:09:53.660 Host Write Commands: 34139 00:09:53.660 Controller Busy Time: 0 minutes 00:09:53.660 Power Cycles: 0 00:09:53.660 Power On Hours: 0 hours 00:09:53.660 Unsafe Shutdowns: 0 00:09:53.660 Unrecoverable Media Errors: 0 00:09:53.660 Lifetime Error Log Entries: 0 00:09:53.660 Warning Temperature Time: 0 minutes 00:09:53.660 Critical Temperature Time: 0 minutes 00:09:53.660 00:09:53.660 Number of Queues 00:09:53.660 ================ 00:09:53.660 Number of I/O Submission Queues: 64 00:09:53.660 Number of I/O Completion Queues: 64 00:09:53.660 00:09:53.660 ZNS Specific Controller Data 00:09:53.660 ============================ 00:09:53.660 Zone Append Size Limit: 0 00:09:53.660 00:09:53.660 00:09:53.660 Active Namespaces 00:09:53.660 ================= 00:09:53.660 Namespace ID:1 00:09:53.660 Error Recovery Timeout: Unlimited 00:09:53.660 Command Set Identifier: NVM (00h) 00:09:53.660 Deallocate: Supported 
00:09:53.660 Deallocated/Unwritten Error: Supported 00:09:53.660 Deallocated Read Value: All 0x00 00:09:53.660 Deallocate in Write Zeroes: Not Supported 00:09:53.660 Deallocated Guard Field: 0xFFFF 00:09:53.660 Flush: Supported 00:09:53.660 Reservation: Not Supported 00:09:53.660 Metadata Transferred as: Separate Metadata Buffer 00:09:53.660 Namespace Sharing Capabilities: Private 00:09:53.660 Size (in LBAs): 1548666 (5GiB) 00:09:53.660 Capacity (in LBAs): 1548666 (5GiB) 00:09:53.660 Utilization (in LBAs): 1548666 (5GiB) 00:09:53.660 Thin Provisioning: Not Supported 00:09:53.660 Per-NS Atomic Units: No 00:09:53.660 Maximum Single Source Range Length: 128 00:09:53.660 Maximum Copy Length: 128 00:09:53.661 Maximum Source Range Count: 128 00:09:53.661 NGUID/EUI64 Never Reused: No 00:09:53.661 Namespace Write Protected: No 00:09:53.661 Number of LBA Formats: 8 00:09:53.661 Current LBA Format: LBA Format #07 00:09:53.661 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.661 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.661 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.661 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.661 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.661 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.661 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.661 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.661 00:09:53.661 NVM Specific Namespace Data 00:09:53.661 =========================== 00:09:53.661 Logical Block Storage Tag Mask: 0 00:09:53.661 Protection Information Capabilities: 00:09:53.661 16b Guard Protection Information Storage Tag Support: No 00:09:53.661 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:53.661 Storage Tag Check Read Support: No 00:09:53.661 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.661 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.661 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.661 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.661 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.661 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.661 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.661 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.920 10:21:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:53.920 10:21:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:54.181 ===================================================== 00:09:54.181 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:54.181 ===================================================== 00:09:54.181 Controller Capabilities/Features 00:09:54.181 ================================ 00:09:54.181 Vendor ID: 1b36 00:09:54.181 Subsystem Vendor ID: 1af4 00:09:54.181 Serial Number: 12341 00:09:54.181 Model Number: QEMU NVMe Ctrl 00:09:54.181 Firmware Version: 8.0.0 00:09:54.181 Recommended Arb Burst: 6 00:09:54.181 IEEE OUI Identifier: 00 54 52 00:09:54.181 Multi-path I/O 00:09:54.181 May have multiple subsystem ports: No 00:09:54.181 May have multiple 
controllers: No 00:09:54.181 Associated with SR-IOV VF: No 00:09:54.181 Max Data Transfer Size: 524288 00:09:54.181 Max Number of Namespaces: 256 00:09:54.181 Max Number of I/O Queues: 64 00:09:54.181 NVMe Specification Version (VS): 1.4 00:09:54.181 NVMe Specification Version (Identify): 1.4 00:09:54.181 Maximum Queue Entries: 2048 00:09:54.181 Contiguous Queues Required: Yes 00:09:54.181 Arbitration Mechanisms Supported 00:09:54.181 Weighted Round Robin: Not Supported 00:09:54.181 Vendor Specific: Not Supported 00:09:54.181 Reset Timeout: 7500 ms 00:09:54.181 Doorbell Stride: 4 bytes 00:09:54.181 NVM Subsystem Reset: Not Supported 00:09:54.181 Command Sets Supported 00:09:54.181 NVM Command Set: Supported 00:09:54.181 Boot Partition: Not Supported 00:09:54.181 Memory Page Size Minimum: 4096 bytes 00:09:54.181 Memory Page Size Maximum: 65536 bytes 00:09:54.181 Persistent Memory Region: Not Supported 00:09:54.181 Optional Asynchronous Events Supported 00:09:54.181 Namespace Attribute Notices: Supported 00:09:54.181 Firmware Activation Notices: Not Supported 00:09:54.181 ANA Change Notices: Not Supported 00:09:54.181 PLE Aggregate Log Change Notices: Not Supported 00:09:54.181 LBA Status Info Alert Notices: Not Supported 00:09:54.181 EGE Aggregate Log Change Notices: Not Supported 00:09:54.181 Normal NVM Subsystem Shutdown event: Not Supported 00:09:54.181 Zone Descriptor Change Notices: Not Supported 00:09:54.181 Discovery Log Change Notices: Not Supported 00:09:54.181 Controller Attributes 00:09:54.181 128-bit Host Identifier: Not Supported 00:09:54.181 Non-Operational Permissive Mode: Not Supported 00:09:54.181 NVM Sets: Not Supported 00:09:54.181 Read Recovery Levels: Not Supported 00:09:54.181 Endurance Groups: Not Supported 00:09:54.181 Predictable Latency Mode: Not Supported 00:09:54.181 Traffic Based Keep ALive: Not Supported 00:09:54.181 Namespace Granularity: Not Supported 00:09:54.181 SQ Associations: Not Supported 00:09:54.181 UUID List: Not Supported 00:09:54.181 Multi-Domain Subsystem: Not Supported 00:09:54.181 Fixed Capacity Management: Not Supported 00:09:54.181 Variable Capacity Management: Not Supported 00:09:54.181 Delete Endurance Group: Not Supported 00:09:54.181 Delete NVM Set: Not Supported 00:09:54.181 Extended LBA Formats Supported: Supported 00:09:54.181 Flexible Data Placement Supported: Not Supported 00:09:54.181 00:09:54.181 Controller Memory Buffer Support 00:09:54.181 ================================ 00:09:54.181 Supported: No 00:09:54.181 00:09:54.181 Persistent Memory Region Support 00:09:54.181 ================================ 00:09:54.181 Supported: No 00:09:54.181 00:09:54.181 Admin Command Set Attributes 00:09:54.181 ============================ 00:09:54.181 Security Send/Receive: Not Supported 00:09:54.181 Format NVM: Supported 00:09:54.181 Firmware Activate/Download: Not Supported 00:09:54.181 Namespace Management: Supported 00:09:54.181 Device Self-Test: Not Supported 00:09:54.181 Directives: Supported 00:09:54.181 NVMe-MI: Not Supported 00:09:54.181 Virtualization Management: Not Supported 00:09:54.181 Doorbell Buffer Config: Supported 00:09:54.181 Get LBA Status Capability: Not Supported 00:09:54.182 Command & Feature Lockdown Capability: Not Supported 00:09:54.182 Abort Command Limit: 4 00:09:54.182 Async Event Request Limit: 4 00:09:54.182 Number of Firmware Slots: N/A 00:09:54.182 Firmware Slot 1 Read-Only: N/A 00:09:54.182 Firmware Activation Without Reset: N/A 00:09:54.182 Multiple Update Detection Support: N/A 00:09:54.182 Firmware Update 
Granularity: No Information Provided 00:09:54.182 Per-Namespace SMART Log: Yes 00:09:54.182 Asymmetric Namespace Access Log Page: Not Supported 00:09:54.182 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:54.182 Command Effects Log Page: Supported 00:09:54.182 Get Log Page Extended Data: Supported 00:09:54.182 Telemetry Log Pages: Not Supported 00:09:54.182 Persistent Event Log Pages: Not Supported 00:09:54.182 Supported Log Pages Log Page: May Support 00:09:54.182 Commands Supported & Effects Log Page: Not Supported 00:09:54.182 Feature Identifiers & Effects Log Page:May Support 00:09:54.182 NVMe-MI Commands & Effects Log Page: May Support 00:09:54.182 Data Area 4 for Telemetry Log: Not Supported 00:09:54.182 Error Log Page Entries Supported: 1 00:09:54.182 Keep Alive: Not Supported 00:09:54.182 00:09:54.182 NVM Command Set Attributes 00:09:54.182 ========================== 00:09:54.182 Submission Queue Entry Size 00:09:54.182 Max: 64 00:09:54.182 Min: 64 00:09:54.182 Completion Queue Entry Size 00:09:54.182 Max: 16 00:09:54.182 Min: 16 00:09:54.182 Number of Namespaces: 256 00:09:54.182 Compare Command: Supported 00:09:54.182 Write Uncorrectable Command: Not Supported 00:09:54.182 Dataset Management Command: Supported 00:09:54.182 Write Zeroes Command: Supported 00:09:54.182 Set Features Save Field: Supported 00:09:54.182 Reservations: Not Supported 00:09:54.182 Timestamp: Supported 00:09:54.182 Copy: Supported 00:09:54.182 Volatile Write Cache: Present 00:09:54.182 Atomic Write Unit (Normal): 1 00:09:54.182 Atomic Write Unit (PFail): 1 00:09:54.182 Atomic Compare & Write Unit: 1 00:09:54.182 Fused Compare & Write: Not Supported 00:09:54.182 Scatter-Gather List 00:09:54.182 SGL Command Set: Supported 00:09:54.182 SGL Keyed: Not Supported 00:09:54.182 SGL Bit Bucket Descriptor: Not Supported 00:09:54.182 SGL Metadata Pointer: Not Supported 00:09:54.182 Oversized SGL: Not Supported 00:09:54.182 SGL Metadata Address: Not Supported 00:09:54.182 SGL Offset: Not Supported 00:09:54.182 Transport SGL Data Block: Not Supported 00:09:54.182 Replay Protected Memory Block: Not Supported 00:09:54.182 00:09:54.182 Firmware Slot Information 00:09:54.182 ========================= 00:09:54.182 Active slot: 1 00:09:54.182 Slot 1 Firmware Revision: 1.0 00:09:54.182 00:09:54.182 00:09:54.182 Commands Supported and Effects 00:09:54.182 ============================== 00:09:54.182 Admin Commands 00:09:54.182 -------------- 00:09:54.182 Delete I/O Submission Queue (00h): Supported 00:09:54.182 Create I/O Submission Queue (01h): Supported 00:09:54.182 Get Log Page (02h): Supported 00:09:54.182 Delete I/O Completion Queue (04h): Supported 00:09:54.182 Create I/O Completion Queue (05h): Supported 00:09:54.182 Identify (06h): Supported 00:09:54.182 Abort (08h): Supported 00:09:54.182 Set Features (09h): Supported 00:09:54.182 Get Features (0Ah): Supported 00:09:54.182 Asynchronous Event Request (0Ch): Supported 00:09:54.182 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:54.182 Directive Send (19h): Supported 00:09:54.182 Directive Receive (1Ah): Supported 00:09:54.182 Virtualization Management (1Ch): Supported 00:09:54.182 Doorbell Buffer Config (7Ch): Supported 00:09:54.182 Format NVM (80h): Supported LBA-Change 00:09:54.182 I/O Commands 00:09:54.182 ------------ 00:09:54.182 Flush (00h): Supported LBA-Change 00:09:54.182 Write (01h): Supported LBA-Change 00:09:54.182 Read (02h): Supported 00:09:54.182 Compare (05h): Supported 00:09:54.182 Write Zeroes (08h): Supported LBA-Change 00:09:54.182 
Dataset Management (09h): Supported LBA-Change 00:09:54.182 Unknown (0Ch): Supported 00:09:54.182 Unknown (12h): Supported 00:09:54.182 Copy (19h): Supported LBA-Change 00:09:54.182 Unknown (1Dh): Supported LBA-Change 00:09:54.182 00:09:54.182 Error Log 00:09:54.182 ========= 00:09:54.182 00:09:54.182 Arbitration 00:09:54.182 =========== 00:09:54.182 Arbitration Burst: no limit 00:09:54.182 00:09:54.182 Power Management 00:09:54.182 ================ 00:09:54.182 Number of Power States: 1 00:09:54.182 Current Power State: Power State #0 00:09:54.182 Power State #0: 00:09:54.182 Max Power: 25.00 W 00:09:54.182 Non-Operational State: Operational 00:09:54.182 Entry Latency: 16 microseconds 00:09:54.182 Exit Latency: 4 microseconds 00:09:54.182 Relative Read Throughput: 0 00:09:54.182 Relative Read Latency: 0 00:09:54.182 Relative Write Throughput: 0 00:09:54.182 Relative Write Latency: 0 00:09:54.182 Idle Power: Not Reported 00:09:54.182 Active Power: Not Reported 00:09:54.182 Non-Operational Permissive Mode: Not Supported 00:09:54.182 00:09:54.182 Health Information 00:09:54.182 ================== 00:09:54.182 Critical Warnings: 00:09:54.182 Available Spare Space: OK 00:09:54.182 Temperature: OK 00:09:54.182 Device Reliability: OK 00:09:54.182 Read Only: No 00:09:54.182 Volatile Memory Backup: OK 00:09:54.182 Current Temperature: 323 Kelvin (50 Celsius) 00:09:54.182 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:54.182 Available Spare: 0% 00:09:54.182 Available Spare Threshold: 0% 00:09:54.182 Life Percentage Used: 0% 00:09:54.182 Data Units Read: 1170 00:09:54.182 Data Units Written: 1037 00:09:54.182 Host Read Commands: 49474 00:09:54.182 Host Write Commands: 48249 00:09:54.182 Controller Busy Time: 0 minutes 00:09:54.182 Power Cycles: 0 00:09:54.182 Power On Hours: 0 hours 00:09:54.182 Unsafe Shutdowns: 0 00:09:54.182 Unrecoverable Media Errors: 0 00:09:54.182 Lifetime Error Log Entries: 0 00:09:54.182 Warning Temperature Time: 0 minutes 00:09:54.182 Critical Temperature Time: 0 minutes 00:09:54.182 00:09:54.182 Number of Queues 00:09:54.182 ================ 00:09:54.182 Number of I/O Submission Queues: 64 00:09:54.182 Number of I/O Completion Queues: 64 00:09:54.182 00:09:54.182 ZNS Specific Controller Data 00:09:54.182 ============================ 00:09:54.182 Zone Append Size Limit: 0 00:09:54.182 00:09:54.182 00:09:54.182 Active Namespaces 00:09:54.182 ================= 00:09:54.182 Namespace ID:1 00:09:54.182 Error Recovery Timeout: Unlimited 00:09:54.182 Command Set Identifier: NVM (00h) 00:09:54.182 Deallocate: Supported 00:09:54.182 Deallocated/Unwritten Error: Supported 00:09:54.182 Deallocated Read Value: All 0x00 00:09:54.182 Deallocate in Write Zeroes: Not Supported 00:09:54.182 Deallocated Guard Field: 0xFFFF 00:09:54.182 Flush: Supported 00:09:54.182 Reservation: Not Supported 00:09:54.182 Namespace Sharing Capabilities: Private 00:09:54.182 Size (in LBAs): 1310720 (5GiB) 00:09:54.182 Capacity (in LBAs): 1310720 (5GiB) 00:09:54.182 Utilization (in LBAs): 1310720 (5GiB) 00:09:54.182 Thin Provisioning: Not Supported 00:09:54.182 Per-NS Atomic Units: No 00:09:54.182 Maximum Single Source Range Length: 128 00:09:54.182 Maximum Copy Length: 128 00:09:54.182 Maximum Source Range Count: 128 00:09:54.182 NGUID/EUI64 Never Reused: No 00:09:54.182 Namespace Write Protected: No 00:09:54.182 Number of LBA Formats: 8 00:09:54.182 Current LBA Format: LBA Format #04 00:09:54.182 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:54.182 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:09:54.182 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:54.182 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:54.182 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:54.182 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:54.182 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:54.182 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:54.182 00:09:54.182 NVM Specific Namespace Data 00:09:54.182 =========================== 00:09:54.182 Logical Block Storage Tag Mask: 0 00:09:54.182 Protection Information Capabilities: 00:09:54.182 16b Guard Protection Information Storage Tag Support: No 00:09:54.182 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:54.182 Storage Tag Check Read Support: No 00:09:54.182 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.182 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.182 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.182 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.182 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.183 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.183 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.183 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.183 10:21:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:54.183 10:21:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:54.443 ===================================================== 00:09:54.443 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:54.443 ===================================================== 00:09:54.443 Controller Capabilities/Features 00:09:54.443 ================================ 00:09:54.443 Vendor ID: 1b36 00:09:54.443 Subsystem Vendor ID: 1af4 00:09:54.443 Serial Number: 12342 00:09:54.443 Model Number: QEMU NVMe Ctrl 00:09:54.443 Firmware Version: 8.0.0 00:09:54.443 Recommended Arb Burst: 6 00:09:54.443 IEEE OUI Identifier: 00 54 52 00:09:54.443 Multi-path I/O 00:09:54.443 May have multiple subsystem ports: No 00:09:54.443 May have multiple controllers: No 00:09:54.443 Associated with SR-IOV VF: No 00:09:54.443 Max Data Transfer Size: 524288 00:09:54.443 Max Number of Namespaces: 256 00:09:54.443 Max Number of I/O Queues: 64 00:09:54.443 NVMe Specification Version (VS): 1.4 00:09:54.443 NVMe Specification Version (Identify): 1.4 00:09:54.443 Maximum Queue Entries: 2048 00:09:54.443 Contiguous Queues Required: Yes 00:09:54.443 Arbitration Mechanisms Supported 00:09:54.443 Weighted Round Robin: Not Supported 00:09:54.443 Vendor Specific: Not Supported 00:09:54.443 Reset Timeout: 7500 ms 00:09:54.443 Doorbell Stride: 4 bytes 00:09:54.443 NVM Subsystem Reset: Not Supported 00:09:54.443 Command Sets Supported 00:09:54.443 NVM Command Set: Supported 00:09:54.443 Boot Partition: Not Supported 00:09:54.443 Memory Page Size Minimum: 4096 bytes 00:09:54.443 Memory Page Size Maximum: 65536 bytes 00:09:54.443 Persistent Memory Region: Not Supported 00:09:54.443 Optional Asynchronous Events Supported 00:09:54.443 Namespace Attribute Notices: Supported 00:09:54.443 
Firmware Activation Notices: Not Supported 00:09:54.443 ANA Change Notices: Not Supported 00:09:54.443 PLE Aggregate Log Change Notices: Not Supported 00:09:54.443 LBA Status Info Alert Notices: Not Supported 00:09:54.443 EGE Aggregate Log Change Notices: Not Supported 00:09:54.443 Normal NVM Subsystem Shutdown event: Not Supported 00:09:54.443 Zone Descriptor Change Notices: Not Supported 00:09:54.443 Discovery Log Change Notices: Not Supported 00:09:54.443 Controller Attributes 00:09:54.443 128-bit Host Identifier: Not Supported 00:09:54.443 Non-Operational Permissive Mode: Not Supported 00:09:54.443 NVM Sets: Not Supported 00:09:54.443 Read Recovery Levels: Not Supported 00:09:54.443 Endurance Groups: Not Supported 00:09:54.443 Predictable Latency Mode: Not Supported 00:09:54.443 Traffic Based Keep ALive: Not Supported 00:09:54.443 Namespace Granularity: Not Supported 00:09:54.443 SQ Associations: Not Supported 00:09:54.443 UUID List: Not Supported 00:09:54.443 Multi-Domain Subsystem: Not Supported 00:09:54.443 Fixed Capacity Management: Not Supported 00:09:54.443 Variable Capacity Management: Not Supported 00:09:54.443 Delete Endurance Group: Not Supported 00:09:54.443 Delete NVM Set: Not Supported 00:09:54.443 Extended LBA Formats Supported: Supported 00:09:54.443 Flexible Data Placement Supported: Not Supported 00:09:54.443 00:09:54.443 Controller Memory Buffer Support 00:09:54.443 ================================ 00:09:54.443 Supported: No 00:09:54.443 00:09:54.443 Persistent Memory Region Support 00:09:54.443 ================================ 00:09:54.443 Supported: No 00:09:54.443 00:09:54.443 Admin Command Set Attributes 00:09:54.443 ============================ 00:09:54.443 Security Send/Receive: Not Supported 00:09:54.443 Format NVM: Supported 00:09:54.443 Firmware Activate/Download: Not Supported 00:09:54.443 Namespace Management: Supported 00:09:54.443 Device Self-Test: Not Supported 00:09:54.443 Directives: Supported 00:09:54.443 NVMe-MI: Not Supported 00:09:54.443 Virtualization Management: Not Supported 00:09:54.443 Doorbell Buffer Config: Supported 00:09:54.443 Get LBA Status Capability: Not Supported 00:09:54.443 Command & Feature Lockdown Capability: Not Supported 00:09:54.443 Abort Command Limit: 4 00:09:54.443 Async Event Request Limit: 4 00:09:54.443 Number of Firmware Slots: N/A 00:09:54.443 Firmware Slot 1 Read-Only: N/A 00:09:54.443 Firmware Activation Without Reset: N/A 00:09:54.443 Multiple Update Detection Support: N/A 00:09:54.443 Firmware Update Granularity: No Information Provided 00:09:54.443 Per-Namespace SMART Log: Yes 00:09:54.443 Asymmetric Namespace Access Log Page: Not Supported 00:09:54.443 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:54.443 Command Effects Log Page: Supported 00:09:54.443 Get Log Page Extended Data: Supported 00:09:54.443 Telemetry Log Pages: Not Supported 00:09:54.443 Persistent Event Log Pages: Not Supported 00:09:54.443 Supported Log Pages Log Page: May Support 00:09:54.443 Commands Supported & Effects Log Page: Not Supported 00:09:54.443 Feature Identifiers & Effects Log Page:May Support 00:09:54.443 NVMe-MI Commands & Effects Log Page: May Support 00:09:54.443 Data Area 4 for Telemetry Log: Not Supported 00:09:54.443 Error Log Page Entries Supported: 1 00:09:54.443 Keep Alive: Not Supported 00:09:54.443 00:09:54.443 NVM Command Set Attributes 00:09:54.443 ========================== 00:09:54.443 Submission Queue Entry Size 00:09:54.443 Max: 64 00:09:54.443 Min: 64 00:09:54.443 Completion Queue Entry Size 00:09:54.443 Max: 16 
00:09:54.443 Min: 16 00:09:54.443 Number of Namespaces: 256 00:09:54.444 Compare Command: Supported 00:09:54.444 Write Uncorrectable Command: Not Supported 00:09:54.444 Dataset Management Command: Supported 00:09:54.444 Write Zeroes Command: Supported 00:09:54.444 Set Features Save Field: Supported 00:09:54.444 Reservations: Not Supported 00:09:54.444 Timestamp: Supported 00:09:54.444 Copy: Supported 00:09:54.444 Volatile Write Cache: Present 00:09:54.444 Atomic Write Unit (Normal): 1 00:09:54.444 Atomic Write Unit (PFail): 1 00:09:54.444 Atomic Compare & Write Unit: 1 00:09:54.444 Fused Compare & Write: Not Supported 00:09:54.444 Scatter-Gather List 00:09:54.444 SGL Command Set: Supported 00:09:54.444 SGL Keyed: Not Supported 00:09:54.444 SGL Bit Bucket Descriptor: Not Supported 00:09:54.444 SGL Metadata Pointer: Not Supported 00:09:54.444 Oversized SGL: Not Supported 00:09:54.444 SGL Metadata Address: Not Supported 00:09:54.444 SGL Offset: Not Supported 00:09:54.444 Transport SGL Data Block: Not Supported 00:09:54.444 Replay Protected Memory Block: Not Supported 00:09:54.444 00:09:54.444 Firmware Slot Information 00:09:54.444 ========================= 00:09:54.444 Active slot: 1 00:09:54.444 Slot 1 Firmware Revision: 1.0 00:09:54.444 00:09:54.444 00:09:54.444 Commands Supported and Effects 00:09:54.444 ============================== 00:09:54.444 Admin Commands 00:09:54.444 -------------- 00:09:54.444 Delete I/O Submission Queue (00h): Supported 00:09:54.444 Create I/O Submission Queue (01h): Supported 00:09:54.444 Get Log Page (02h): Supported 00:09:54.444 Delete I/O Completion Queue (04h): Supported 00:09:54.444 Create I/O Completion Queue (05h): Supported 00:09:54.444 Identify (06h): Supported 00:09:54.444 Abort (08h): Supported 00:09:54.444 Set Features (09h): Supported 00:09:54.444 Get Features (0Ah): Supported 00:09:54.444 Asynchronous Event Request (0Ch): Supported 00:09:54.444 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:54.444 Directive Send (19h): Supported 00:09:54.444 Directive Receive (1Ah): Supported 00:09:54.444 Virtualization Management (1Ch): Supported 00:09:54.444 Doorbell Buffer Config (7Ch): Supported 00:09:54.444 Format NVM (80h): Supported LBA-Change 00:09:54.444 I/O Commands 00:09:54.444 ------------ 00:09:54.444 Flush (00h): Supported LBA-Change 00:09:54.444 Write (01h): Supported LBA-Change 00:09:54.444 Read (02h): Supported 00:09:54.444 Compare (05h): Supported 00:09:54.444 Write Zeroes (08h): Supported LBA-Change 00:09:54.444 Dataset Management (09h): Supported LBA-Change 00:09:54.444 Unknown (0Ch): Supported 00:09:54.444 Unknown (12h): Supported 00:09:54.444 Copy (19h): Supported LBA-Change 00:09:54.444 Unknown (1Dh): Supported LBA-Change 00:09:54.444 00:09:54.444 Error Log 00:09:54.444 ========= 00:09:54.444 00:09:54.444 Arbitration 00:09:54.444 =========== 00:09:54.444 Arbitration Burst: no limit 00:09:54.444 00:09:54.444 Power Management 00:09:54.444 ================ 00:09:54.444 Number of Power States: 1 00:09:54.444 Current Power State: Power State #0 00:09:54.444 Power State #0: 00:09:54.444 Max Power: 25.00 W 00:09:54.444 Non-Operational State: Operational 00:09:54.444 Entry Latency: 16 microseconds 00:09:54.444 Exit Latency: 4 microseconds 00:09:54.444 Relative Read Throughput: 0 00:09:54.444 Relative Read Latency: 0 00:09:54.444 Relative Write Throughput: 0 00:09:54.444 Relative Write Latency: 0 00:09:54.444 Idle Power: Not Reported 00:09:54.444 Active Power: Not Reported 00:09:54.444 Non-Operational Permissive Mode: Not Supported 
00:09:54.444 00:09:54.444 Health Information 00:09:54.444 ================== 00:09:54.444 Critical Warnings: 00:09:54.444 Available Spare Space: OK 00:09:54.444 Temperature: OK 00:09:54.444 Device Reliability: OK 00:09:54.444 Read Only: No 00:09:54.444 Volatile Memory Backup: OK 00:09:54.444 Current Temperature: 323 Kelvin (50 Celsius) 00:09:54.444 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:54.444 Available Spare: 0% 00:09:54.444 Available Spare Threshold: 0% 00:09:54.444 Life Percentage Used: 0% 00:09:54.444 Data Units Read: 2593 00:09:54.444 Data Units Written: 2380 00:09:54.444 Host Read Commands: 106294 00:09:54.444 Host Write Commands: 104563 00:09:54.444 Controller Busy Time: 0 minutes 00:09:54.444 Power Cycles: 0 00:09:54.444 Power On Hours: 0 hours 00:09:54.444 Unsafe Shutdowns: 0 00:09:54.444 Unrecoverable Media Errors: 0 00:09:54.444 Lifetime Error Log Entries: 0 00:09:54.444 Warning Temperature Time: 0 minutes 00:09:54.444 Critical Temperature Time: 0 minutes 00:09:54.444 00:09:54.444 Number of Queues 00:09:54.444 ================ 00:09:54.444 Number of I/O Submission Queues: 64 00:09:54.444 Number of I/O Completion Queues: 64 00:09:54.444 00:09:54.444 ZNS Specific Controller Data 00:09:54.444 ============================ 00:09:54.444 Zone Append Size Limit: 0 00:09:54.444 00:09:54.444 00:09:54.444 Active Namespaces 00:09:54.444 ================= 00:09:54.444 Namespace ID:1 00:09:54.444 Error Recovery Timeout: Unlimited 00:09:54.444 Command Set Identifier: NVM (00h) 00:09:54.444 Deallocate: Supported 00:09:54.444 Deallocated/Unwritten Error: Supported 00:09:54.444 Deallocated Read Value: All 0x00 00:09:54.444 Deallocate in Write Zeroes: Not Supported 00:09:54.444 Deallocated Guard Field: 0xFFFF 00:09:54.444 Flush: Supported 00:09:54.444 Reservation: Not Supported 00:09:54.444 Namespace Sharing Capabilities: Private 00:09:54.444 Size (in LBAs): 1048576 (4GiB) 00:09:54.444 Capacity (in LBAs): 1048576 (4GiB) 00:09:54.444 Utilization (in LBAs): 1048576 (4GiB) 00:09:54.444 Thin Provisioning: Not Supported 00:09:54.444 Per-NS Atomic Units: No 00:09:54.444 Maximum Single Source Range Length: 128 00:09:54.444 Maximum Copy Length: 128 00:09:54.444 Maximum Source Range Count: 128 00:09:54.444 NGUID/EUI64 Never Reused: No 00:09:54.444 Namespace Write Protected: No 00:09:54.444 Number of LBA Formats: 8 00:09:54.444 Current LBA Format: LBA Format #04 00:09:54.444 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:54.444 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:54.444 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:54.444 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:54.444 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:54.444 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:54.444 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:54.444 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:54.444 00:09:54.444 NVM Specific Namespace Data 00:09:54.444 =========================== 00:09:54.444 Logical Block Storage Tag Mask: 0 00:09:54.444 Protection Information Capabilities: 00:09:54.444 16b Guard Protection Information Storage Tag Support: No 00:09:54.444 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:54.444 Storage Tag Check Read Support: No 00:09:54.444 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.444 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.444 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.444 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.444 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.444 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.444 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.444 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.444 Namespace ID:2 00:09:54.444 Error Recovery Timeout: Unlimited 00:09:54.444 Command Set Identifier: NVM (00h) 00:09:54.444 Deallocate: Supported 00:09:54.444 Deallocated/Unwritten Error: Supported 00:09:54.444 Deallocated Read Value: All 0x00 00:09:54.444 Deallocate in Write Zeroes: Not Supported 00:09:54.444 Deallocated Guard Field: 0xFFFF 00:09:54.444 Flush: Supported 00:09:54.444 Reservation: Not Supported 00:09:54.444 Namespace Sharing Capabilities: Private 00:09:54.444 Size (in LBAs): 1048576 (4GiB) 00:09:54.444 Capacity (in LBAs): 1048576 (4GiB) 00:09:54.444 Utilization (in LBAs): 1048576 (4GiB) 00:09:54.444 Thin Provisioning: Not Supported 00:09:54.444 Per-NS Atomic Units: No 00:09:54.444 Maximum Single Source Range Length: 128 00:09:54.444 Maximum Copy Length: 128 00:09:54.444 Maximum Source Range Count: 128 00:09:54.444 NGUID/EUI64 Never Reused: No 00:09:54.444 Namespace Write Protected: No 00:09:54.444 Number of LBA Formats: 8 00:09:54.444 Current LBA Format: LBA Format #04 00:09:54.444 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:54.444 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:54.444 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:54.444 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:54.444 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:54.444 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:54.444 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:54.445 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:54.445 00:09:54.445 NVM Specific Namespace Data 00:09:54.445 =========================== 00:09:54.445 Logical Block Storage Tag Mask: 0 00:09:54.445 Protection Information Capabilities: 00:09:54.445 16b Guard Protection Information Storage Tag Support: No 00:09:54.445 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:54.445 Storage Tag Check Read Support: No 00:09:54.445 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Namespace ID:3 00:09:54.445 Error Recovery Timeout: Unlimited 00:09:54.445 Command Set Identifier: NVM (00h) 00:09:54.445 Deallocate: Supported 00:09:54.445 Deallocated/Unwritten Error: Supported 00:09:54.445 Deallocated Read 
Value: All 0x00 00:09:54.445 Deallocate in Write Zeroes: Not Supported 00:09:54.445 Deallocated Guard Field: 0xFFFF 00:09:54.445 Flush: Supported 00:09:54.445 Reservation: Not Supported 00:09:54.445 Namespace Sharing Capabilities: Private 00:09:54.445 Size (in LBAs): 1048576 (4GiB) 00:09:54.445 Capacity (in LBAs): 1048576 (4GiB) 00:09:54.445 Utilization (in LBAs): 1048576 (4GiB) 00:09:54.445 Thin Provisioning: Not Supported 00:09:54.445 Per-NS Atomic Units: No 00:09:54.445 Maximum Single Source Range Length: 128 00:09:54.445 Maximum Copy Length: 128 00:09:54.445 Maximum Source Range Count: 128 00:09:54.445 NGUID/EUI64 Never Reused: No 00:09:54.445 Namespace Write Protected: No 00:09:54.445 Number of LBA Formats: 8 00:09:54.445 Current LBA Format: LBA Format #04 00:09:54.445 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:54.445 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:54.445 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:54.445 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:54.445 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:54.445 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:54.445 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:54.445 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:54.445 00:09:54.445 NVM Specific Namespace Data 00:09:54.445 =========================== 00:09:54.445 Logical Block Storage Tag Mask: 0 00:09:54.445 Protection Information Capabilities: 00:09:54.445 16b Guard Protection Information Storage Tag Support: No 00:09:54.445 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:54.445 Storage Tag Check Read Support: No 00:09:54.445 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.445 10:21:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:54.445 10:21:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:54.705 ===================================================== 00:09:54.705 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:54.705 ===================================================== 00:09:54.705 Controller Capabilities/Features 00:09:54.705 ================================ 00:09:54.705 Vendor ID: 1b36 00:09:54.705 Subsystem Vendor ID: 1af4 00:09:54.705 Serial Number: 12343 00:09:54.705 Model Number: QEMU NVMe Ctrl 00:09:54.705 Firmware Version: 8.0.0 00:09:54.705 Recommended Arb Burst: 6 00:09:54.705 IEEE OUI Identifier: 00 54 52 00:09:54.705 Multi-path I/O 00:09:54.705 May have multiple subsystem ports: No 00:09:54.705 May have multiple controllers: Yes 00:09:54.705 Associated with SR-IOV VF: No 00:09:54.705 Max Data Transfer Size: 524288 00:09:54.705 Max Number of Namespaces: 
256 00:09:54.705 Max Number of I/O Queues: 64 00:09:54.705 NVMe Specification Version (VS): 1.4 00:09:54.705 NVMe Specification Version (Identify): 1.4 00:09:54.705 Maximum Queue Entries: 2048 00:09:54.705 Contiguous Queues Required: Yes 00:09:54.705 Arbitration Mechanisms Supported 00:09:54.705 Weighted Round Robin: Not Supported 00:09:54.705 Vendor Specific: Not Supported 00:09:54.705 Reset Timeout: 7500 ms 00:09:54.705 Doorbell Stride: 4 bytes 00:09:54.705 NVM Subsystem Reset: Not Supported 00:09:54.705 Command Sets Supported 00:09:54.705 NVM Command Set: Supported 00:09:54.705 Boot Partition: Not Supported 00:09:54.705 Memory Page Size Minimum: 4096 bytes 00:09:54.705 Memory Page Size Maximum: 65536 bytes 00:09:54.705 Persistent Memory Region: Not Supported 00:09:54.705 Optional Asynchronous Events Supported 00:09:54.705 Namespace Attribute Notices: Supported 00:09:54.705 Firmware Activation Notices: Not Supported 00:09:54.705 ANA Change Notices: Not Supported 00:09:54.705 PLE Aggregate Log Change Notices: Not Supported 00:09:54.705 LBA Status Info Alert Notices: Not Supported 00:09:54.705 EGE Aggregate Log Change Notices: Not Supported 00:09:54.705 Normal NVM Subsystem Shutdown event: Not Supported 00:09:54.705 Zone Descriptor Change Notices: Not Supported 00:09:54.705 Discovery Log Change Notices: Not Supported 00:09:54.705 Controller Attributes 00:09:54.705 128-bit Host Identifier: Not Supported 00:09:54.705 Non-Operational Permissive Mode: Not Supported 00:09:54.705 NVM Sets: Not Supported 00:09:54.705 Read Recovery Levels: Not Supported 00:09:54.705 Endurance Groups: Supported 00:09:54.705 Predictable Latency Mode: Not Supported 00:09:54.705 Traffic Based Keep ALive: Not Supported 00:09:54.705 Namespace Granularity: Not Supported 00:09:54.705 SQ Associations: Not Supported 00:09:54.705 UUID List: Not Supported 00:09:54.705 Multi-Domain Subsystem: Not Supported 00:09:54.705 Fixed Capacity Management: Not Supported 00:09:54.705 Variable Capacity Management: Not Supported 00:09:54.705 Delete Endurance Group: Not Supported 00:09:54.705 Delete NVM Set: Not Supported 00:09:54.705 Extended LBA Formats Supported: Supported 00:09:54.705 Flexible Data Placement Supported: Supported 00:09:54.705 00:09:54.705 Controller Memory Buffer Support 00:09:54.705 ================================ 00:09:54.705 Supported: No 00:09:54.705 00:09:54.705 Persistent Memory Region Support 00:09:54.705 ================================ 00:09:54.705 Supported: No 00:09:54.705 00:09:54.705 Admin Command Set Attributes 00:09:54.705 ============================ 00:09:54.705 Security Send/Receive: Not Supported 00:09:54.705 Format NVM: Supported 00:09:54.705 Firmware Activate/Download: Not Supported 00:09:54.705 Namespace Management: Supported 00:09:54.705 Device Self-Test: Not Supported 00:09:54.705 Directives: Supported 00:09:54.705 NVMe-MI: Not Supported 00:09:54.705 Virtualization Management: Not Supported 00:09:54.705 Doorbell Buffer Config: Supported 00:09:54.705 Get LBA Status Capability: Not Supported 00:09:54.705 Command & Feature Lockdown Capability: Not Supported 00:09:54.705 Abort Command Limit: 4 00:09:54.705 Async Event Request Limit: 4 00:09:54.705 Number of Firmware Slots: N/A 00:09:54.705 Firmware Slot 1 Read-Only: N/A 00:09:54.705 Firmware Activation Without Reset: N/A 00:09:54.705 Multiple Update Detection Support: N/A 00:09:54.706 Firmware Update Granularity: No Information Provided 00:09:54.706 Per-Namespace SMART Log: Yes 00:09:54.706 Asymmetric Namespace Access Log Page: Not Supported 
00:09:54.706 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:54.706 Command Effects Log Page: Supported 00:09:54.706 Get Log Page Extended Data: Supported 00:09:54.706 Telemetry Log Pages: Not Supported 00:09:54.706 Persistent Event Log Pages: Not Supported 00:09:54.706 Supported Log Pages Log Page: May Support 00:09:54.706 Commands Supported & Effects Log Page: Not Supported 00:09:54.706 Feature Identifiers & Effects Log Page:May Support 00:09:54.706 NVMe-MI Commands & Effects Log Page: May Support 00:09:54.706 Data Area 4 for Telemetry Log: Not Supported 00:09:54.706 Error Log Page Entries Supported: 1 00:09:54.706 Keep Alive: Not Supported 00:09:54.706 00:09:54.706 NVM Command Set Attributes 00:09:54.706 ========================== 00:09:54.706 Submission Queue Entry Size 00:09:54.706 Max: 64 00:09:54.706 Min: 64 00:09:54.706 Completion Queue Entry Size 00:09:54.706 Max: 16 00:09:54.706 Min: 16 00:09:54.706 Number of Namespaces: 256 00:09:54.706 Compare Command: Supported 00:09:54.706 Write Uncorrectable Command: Not Supported 00:09:54.706 Dataset Management Command: Supported 00:09:54.706 Write Zeroes Command: Supported 00:09:54.706 Set Features Save Field: Supported 00:09:54.706 Reservations: Not Supported 00:09:54.706 Timestamp: Supported 00:09:54.706 Copy: Supported 00:09:54.706 Volatile Write Cache: Present 00:09:54.706 Atomic Write Unit (Normal): 1 00:09:54.706 Atomic Write Unit (PFail): 1 00:09:54.706 Atomic Compare & Write Unit: 1 00:09:54.706 Fused Compare & Write: Not Supported 00:09:54.706 Scatter-Gather List 00:09:54.706 SGL Command Set: Supported 00:09:54.706 SGL Keyed: Not Supported 00:09:54.706 SGL Bit Bucket Descriptor: Not Supported 00:09:54.706 SGL Metadata Pointer: Not Supported 00:09:54.706 Oversized SGL: Not Supported 00:09:54.706 SGL Metadata Address: Not Supported 00:09:54.706 SGL Offset: Not Supported 00:09:54.706 Transport SGL Data Block: Not Supported 00:09:54.706 Replay Protected Memory Block: Not Supported 00:09:54.706 00:09:54.706 Firmware Slot Information 00:09:54.706 ========================= 00:09:54.706 Active slot: 1 00:09:54.706 Slot 1 Firmware Revision: 1.0 00:09:54.706 00:09:54.706 00:09:54.706 Commands Supported and Effects 00:09:54.706 ============================== 00:09:54.706 Admin Commands 00:09:54.706 -------------- 00:09:54.706 Delete I/O Submission Queue (00h): Supported 00:09:54.706 Create I/O Submission Queue (01h): Supported 00:09:54.706 Get Log Page (02h): Supported 00:09:54.706 Delete I/O Completion Queue (04h): Supported 00:09:54.706 Create I/O Completion Queue (05h): Supported 00:09:54.706 Identify (06h): Supported 00:09:54.706 Abort (08h): Supported 00:09:54.706 Set Features (09h): Supported 00:09:54.706 Get Features (0Ah): Supported 00:09:54.706 Asynchronous Event Request (0Ch): Supported 00:09:54.706 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:54.706 Directive Send (19h): Supported 00:09:54.706 Directive Receive (1Ah): Supported 00:09:54.706 Virtualization Management (1Ch): Supported 00:09:54.706 Doorbell Buffer Config (7Ch): Supported 00:09:54.706 Format NVM (80h): Supported LBA-Change 00:09:54.706 I/O Commands 00:09:54.706 ------------ 00:09:54.706 Flush (00h): Supported LBA-Change 00:09:54.706 Write (01h): Supported LBA-Change 00:09:54.706 Read (02h): Supported 00:09:54.706 Compare (05h): Supported 00:09:54.706 Write Zeroes (08h): Supported LBA-Change 00:09:54.706 Dataset Management (09h): Supported LBA-Change 00:09:54.706 Unknown (0Ch): Supported 00:09:54.706 Unknown (12h): Supported 00:09:54.706 Copy 
(19h): Supported LBA-Change 00:09:54.706 Unknown (1Dh): Supported LBA-Change 00:09:54.706 00:09:54.706 Error Log 00:09:54.706 ========= 00:09:54.706 00:09:54.706 Arbitration 00:09:54.706 =========== 00:09:54.706 Arbitration Burst: no limit 00:09:54.706 00:09:54.706 Power Management 00:09:54.706 ================ 00:09:54.706 Number of Power States: 1 00:09:54.706 Current Power State: Power State #0 00:09:54.706 Power State #0: 00:09:54.706 Max Power: 25.00 W 00:09:54.706 Non-Operational State: Operational 00:09:54.706 Entry Latency: 16 microseconds 00:09:54.706 Exit Latency: 4 microseconds 00:09:54.706 Relative Read Throughput: 0 00:09:54.706 Relative Read Latency: 0 00:09:54.706 Relative Write Throughput: 0 00:09:54.706 Relative Write Latency: 0 00:09:54.706 Idle Power: Not Reported 00:09:54.706 Active Power: Not Reported 00:09:54.706 Non-Operational Permissive Mode: Not Supported 00:09:54.706 00:09:54.706 Health Information 00:09:54.706 ================== 00:09:54.706 Critical Warnings: 00:09:54.706 Available Spare Space: OK 00:09:54.706 Temperature: OK 00:09:54.706 Device Reliability: OK 00:09:54.706 Read Only: No 00:09:54.706 Volatile Memory Backup: OK 00:09:54.706 Current Temperature: 323 Kelvin (50 Celsius) 00:09:54.706 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:54.706 Available Spare: 0% 00:09:54.706 Available Spare Threshold: 0% 00:09:54.706 Life Percentage Used: 0% 00:09:54.706 Data Units Read: 1098 00:09:54.706 Data Units Written: 1027 00:09:54.706 Host Read Commands: 37274 00:09:54.706 Host Write Commands: 36697 00:09:54.706 Controller Busy Time: 0 minutes 00:09:54.706 Power Cycles: 0 00:09:54.706 Power On Hours: 0 hours 00:09:54.706 Unsafe Shutdowns: 0 00:09:54.706 Unrecoverable Media Errors: 0 00:09:54.706 Lifetime Error Log Entries: 0 00:09:54.706 Warning Temperature Time: 0 minutes 00:09:54.706 Critical Temperature Time: 0 minutes 00:09:54.706 00:09:54.706 Number of Queues 00:09:54.706 ================ 00:09:54.706 Number of I/O Submission Queues: 64 00:09:54.706 Number of I/O Completion Queues: 64 00:09:54.706 00:09:54.706 ZNS Specific Controller Data 00:09:54.706 ============================ 00:09:54.706 Zone Append Size Limit: 0 00:09:54.706 00:09:54.706 00:09:54.706 Active Namespaces 00:09:54.706 ================= 00:09:54.706 Namespace ID:1 00:09:54.706 Error Recovery Timeout: Unlimited 00:09:54.706 Command Set Identifier: NVM (00h) 00:09:54.706 Deallocate: Supported 00:09:54.706 Deallocated/Unwritten Error: Supported 00:09:54.706 Deallocated Read Value: All 0x00 00:09:54.706 Deallocate in Write Zeroes: Not Supported 00:09:54.706 Deallocated Guard Field: 0xFFFF 00:09:54.706 Flush: Supported 00:09:54.706 Reservation: Not Supported 00:09:54.706 Namespace Sharing Capabilities: Multiple Controllers 00:09:54.706 Size (in LBAs): 262144 (1GiB) 00:09:54.706 Capacity (in LBAs): 262144 (1GiB) 00:09:54.706 Utilization (in LBAs): 262144 (1GiB) 00:09:54.706 Thin Provisioning: Not Supported 00:09:54.706 Per-NS Atomic Units: No 00:09:54.706 Maximum Single Source Range Length: 128 00:09:54.706 Maximum Copy Length: 128 00:09:54.706 Maximum Source Range Count: 128 00:09:54.706 NGUID/EUI64 Never Reused: No 00:09:54.706 Namespace Write Protected: No 00:09:54.706 Endurance group ID: 1 00:09:54.706 Number of LBA Formats: 8 00:09:54.706 Current LBA Format: LBA Format #04 00:09:54.706 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:54.706 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:54.706 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:54.706 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:09:54.706 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:54.706 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:54.706 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:54.706 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:54.706 00:09:54.706 Get Feature FDP: 00:09:54.706 ================ 00:09:54.706 Enabled: Yes 00:09:54.706 FDP configuration index: 0 00:09:54.706 00:09:54.706 FDP configurations log page 00:09:54.706 =========================== 00:09:54.706 Number of FDP configurations: 1 00:09:54.706 Version: 0 00:09:54.706 Size: 112 00:09:54.706 FDP Configuration Descriptor: 0 00:09:54.706 Descriptor Size: 96 00:09:54.707 Reclaim Group Identifier format: 2 00:09:54.707 FDP Volatile Write Cache: Not Present 00:09:54.707 FDP Configuration: Valid 00:09:54.707 Vendor Specific Size: 0 00:09:54.707 Number of Reclaim Groups: 2 00:09:54.707 Number of Recalim Unit Handles: 8 00:09:54.707 Max Placement Identifiers: 128 00:09:54.707 Number of Namespaces Suppprted: 256 00:09:54.707 Reclaim unit Nominal Size: 6000000 bytes 00:09:54.707 Estimated Reclaim Unit Time Limit: Not Reported 00:09:54.707 RUH Desc #000: RUH Type: Initially Isolated 00:09:54.707 RUH Desc #001: RUH Type: Initially Isolated 00:09:54.707 RUH Desc #002: RUH Type: Initially Isolated 00:09:54.707 RUH Desc #003: RUH Type: Initially Isolated 00:09:54.707 RUH Desc #004: RUH Type: Initially Isolated 00:09:54.707 RUH Desc #005: RUH Type: Initially Isolated 00:09:54.707 RUH Desc #006: RUH Type: Initially Isolated 00:09:54.707 RUH Desc #007: RUH Type: Initially Isolated 00:09:54.707 00:09:54.707 FDP reclaim unit handle usage log page 00:09:54.707 ====================================== 00:09:54.707 Number of Reclaim Unit Handles: 8 00:09:54.707 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:54.707 RUH Usage Desc #001: RUH Attributes: Unused 00:09:54.707 RUH Usage Desc #002: RUH Attributes: Unused 00:09:54.707 RUH Usage Desc #003: RUH Attributes: Unused 00:09:54.707 RUH Usage Desc #004: RUH Attributes: Unused 00:09:54.707 RUH Usage Desc #005: RUH Attributes: Unused 00:09:54.707 RUH Usage Desc #006: RUH Attributes: Unused 00:09:54.707 RUH Usage Desc #007: RUH Attributes: Unused 00:09:54.707 00:09:54.707 FDP statistics log page 00:09:54.707 ======================= 00:09:54.707 Host bytes with metadata written: 637968384 00:09:54.707 Media bytes with metadata written: 638029824 00:09:54.707 Media bytes erased: 0 00:09:54.707 00:09:54.707 FDP events log page 00:09:54.707 =================== 00:09:54.707 Number of FDP events: 0 00:09:54.707 00:09:54.707 NVM Specific Namespace Data 00:09:54.707 =========================== 00:09:54.707 Logical Block Storage Tag Mask: 0 00:09:54.707 Protection Information Capabilities: 00:09:54.707 16b Guard Protection Information Storage Tag Support: No 00:09:54.707 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:54.707 Storage Tag Check Read Support: No 00:09:54.707 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.707 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.707 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.707 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.707 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.707 Extended LBA Format 
#05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.707 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.707 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:54.707 ************************************ 00:09:54.707 END TEST nvme_identify 00:09:54.707 ************************************ 00:09:54.707 00:09:54.707 real 0m1.727s 00:09:54.707 user 0m0.627s 00:09:54.707 sys 0m0.885s 00:09:54.707 10:21:53 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.707 10:21:53 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:54.966 10:21:54 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:54.966 10:21:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.966 10:21:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.966 10:21:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:54.966 ************************************ 00:09:54.966 START TEST nvme_perf 00:09:54.966 ************************************ 00:09:54.966 10:21:54 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:09:54.966 10:21:54 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:56.347 Initializing NVMe Controllers 00:09:56.347 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:56.347 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:56.347 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:56.347 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:56.347 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:56.347 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:56.347 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:56.347 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:56.347 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:56.347 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:56.347 Initialization complete. Launching workers. 
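For reference, the read pass above is driven by the spdk_nvme_perf binary shown in the trace; a minimal sketch of an equivalent manual invocation follows. The flag glosses are assumptions based on common spdk_nvme_perf usage rather than output of this run, and the -N flag is reproduced from the trace without interpretation.
# Minimal sketch, assuming the repo is built at the path used by this job and the
# NVMe devices have already been bound to a userspace driver by the test setup
# (e.g. via scripts/setup.sh).
cd /home/vagrant/spdk_repo/spdk
# -q 128  : I/O queue depth
# -w read : sequential read workload
# -o 12288: I/O size in bytes (12 KiB); MiB/s is roughly IOPS * 12288 / 2^20,
#           which matches the ~166 MiB/s reported for ~14173 IOPS in the table below
# -t 1    : run time in seconds
# -LL     : latency tracking; specified twice for the detailed per-range histograms
# -i 0    : shared-memory group ID
# -N      : kept exactly as passed by the test script (not glossed here)
./build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N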
00:09:56.347 ======================================================== 00:09:56.347 Latency(us) 00:09:56.347 Device Information : IOPS MiB/s Average min max 00:09:56.347 PCIE (0000:00:10.0) NSID 1 from core 0: 14173.01 166.09 9050.71 7787.02 51811.75 00:09:56.347 PCIE (0000:00:11.0) NSID 1 from core 0: 14173.01 166.09 9037.58 7907.55 49914.98 00:09:56.347 PCIE (0000:00:13.0) NSID 1 from core 0: 14173.01 166.09 9022.36 7895.28 48601.11 00:09:56.347 PCIE (0000:00:12.0) NSID 1 from core 0: 14173.01 166.09 9007.43 7837.54 46585.38 00:09:56.347 PCIE (0000:00:12.0) NSID 2 from core 0: 14173.01 166.09 8992.07 7868.57 44557.01 00:09:56.347 PCIE (0000:00:12.0) NSID 3 from core 0: 14236.85 166.84 8937.32 7843.65 37575.04 00:09:56.347 ======================================================== 00:09:56.347 Total : 85101.88 997.29 9007.86 7787.02 51811.75 00:09:56.347 00:09:56.347 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:56.347 ================================================================================= 00:09:56.347 1.00000% : 8001.182us 00:09:56.347 10.00000% : 8211.740us 00:09:56.347 25.00000% : 8422.297us 00:09:56.347 50.00000% : 8685.494us 00:09:56.347 75.00000% : 9001.330us 00:09:56.347 90.00000% : 9211.888us 00:09:56.347 95.00000% : 9422.445us 00:09:56.347 98.00000% : 10106.757us 00:09:56.347 99.00000% : 11159.544us 00:09:56.347 99.50000% : 44848.733us 00:09:56.347 99.90000% : 51376.013us 00:09:56.347 99.99000% : 51797.128us 00:09:56.347 99.99900% : 52007.685us 00:09:56.347 99.99990% : 52007.685us 00:09:56.347 99.99999% : 52007.685us 00:09:56.347 00:09:56.347 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:56.347 ================================================================================= 00:09:56.347 1.00000% : 8053.822us 00:09:56.347 10.00000% : 8317.018us 00:09:56.347 25.00000% : 8474.937us 00:09:56.347 50.00000% : 8685.494us 00:09:56.347 75.00000% : 8948.691us 00:09:56.347 90.00000% : 9159.248us 00:09:56.347 95.00000% : 9369.806us 00:09:56.347 98.00000% : 10106.757us 00:09:56.347 99.00000% : 11580.659us 00:09:56.347 99.50000% : 43164.273us 00:09:56.347 99.90000% : 49691.553us 00:09:56.347 99.99000% : 49902.111us 00:09:56.347 99.99900% : 50112.668us 00:09:56.347 99.99990% : 50112.668us 00:09:56.347 99.99999% : 50112.668us 00:09:56.347 00:09:56.347 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:56.347 ================================================================================= 00:09:56.347 1.00000% : 8053.822us 00:09:56.347 10.00000% : 8317.018us 00:09:56.347 25.00000% : 8474.937us 00:09:56.347 50.00000% : 8685.494us 00:09:56.347 75.00000% : 8948.691us 00:09:56.347 90.00000% : 9159.248us 00:09:56.347 95.00000% : 9369.806us 00:09:56.347 98.00000% : 10001.478us 00:09:56.347 99.00000% : 11422.741us 00:09:56.347 99.50000% : 41900.929us 00:09:56.347 99.90000% : 48217.651us 00:09:56.347 99.99000% : 48638.766us 00:09:56.347 99.99900% : 48638.766us 00:09:56.347 99.99990% : 48638.766us 00:09:56.347 99.99999% : 48638.766us 00:09:56.347 00:09:56.347 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:56.347 ================================================================================= 00:09:56.347 1.00000% : 8053.822us 00:09:56.347 10.00000% : 8317.018us 00:09:56.347 25.00000% : 8474.937us 00:09:56.347 50.00000% : 8685.494us 00:09:56.347 75.00000% : 8948.691us 00:09:56.347 90.00000% : 9159.248us 00:09:56.347 95.00000% : 9369.806us 00:09:56.347 98.00000% : 10054.117us 00:09:56.347 99.00000% : 
11738.577us 00:09:56.347 99.50000% : 40005.912us 00:09:56.347 99.90000% : 46322.635us 00:09:56.347 99.99000% : 46743.749us 00:09:56.347 99.99900% : 46743.749us 00:09:56.347 99.99990% : 46743.749us 00:09:56.347 99.99999% : 46743.749us 00:09:56.347 00:09:56.347 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:56.347 ================================================================================= 00:09:56.347 1.00000% : 8053.822us 00:09:56.347 10.00000% : 8317.018us 00:09:56.347 25.00000% : 8474.937us 00:09:56.347 50.00000% : 8685.494us 00:09:56.347 75.00000% : 8948.691us 00:09:56.347 90.00000% : 9159.248us 00:09:56.347 95.00000% : 9369.806us 00:09:56.347 98.00000% : 10054.117us 00:09:56.347 99.00000% : 12107.052us 00:09:56.347 99.50000% : 37900.337us 00:09:56.347 99.90000% : 44217.060us 00:09:56.347 99.99000% : 44638.175us 00:09:56.347 99.99900% : 44638.175us 00:09:56.347 99.99990% : 44638.175us 00:09:56.347 99.99999% : 44638.175us 00:09:56.347 00:09:56.347 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:56.347 ================================================================================= 00:09:56.347 1.00000% : 8053.822us 00:09:56.347 10.00000% : 8317.018us 00:09:56.347 25.00000% : 8474.937us 00:09:56.347 50.00000% : 8685.494us 00:09:56.347 75.00000% : 8948.691us 00:09:56.347 90.00000% : 9159.248us 00:09:56.347 95.00000% : 9369.806us 00:09:56.347 98.00000% : 10212.035us 00:09:56.347 99.00000% : 12475.528us 00:09:56.347 99.50000% : 31162.500us 00:09:56.347 99.90000% : 37268.665us 00:09:56.347 99.99000% : 37689.780us 00:09:56.347 99.99900% : 37689.780us 00:09:56.347 99.99990% : 37689.780us 00:09:56.347 99.99999% : 37689.780us 00:09:56.347 00:09:56.347 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:56.347 ============================================================================== 00:09:56.347 Range in us Cumulative IO count 00:09:56.347 7737.986 - 7790.625: 0.0070% ( 1) 00:09:56.347 7790.625 - 7843.264: 0.0422% ( 5) 00:09:56.347 7843.264 - 7895.904: 0.2534% ( 30) 00:09:56.347 7895.904 - 7948.543: 0.7320% ( 68) 00:09:56.347 7948.543 - 8001.182: 1.6188% ( 126) 00:09:56.347 8001.182 - 8053.822: 2.9350% ( 187) 00:09:56.347 8053.822 - 8106.461: 4.8635% ( 274) 00:09:56.347 8106.461 - 8159.100: 7.3691% ( 356) 00:09:56.347 8159.100 - 8211.740: 10.3252% ( 420) 00:09:56.347 8211.740 - 8264.379: 13.7739% ( 490) 00:09:56.347 8264.379 - 8317.018: 17.7013% ( 558) 00:09:56.347 8317.018 - 8369.658: 21.7976% ( 582) 00:09:56.347 8369.658 - 8422.297: 26.0628% ( 606) 00:09:56.347 8422.297 - 8474.937: 30.5884% ( 643) 00:09:56.347 8474.937 - 8527.576: 35.4730% ( 694) 00:09:56.347 8527.576 - 8580.215: 40.2449% ( 678) 00:09:56.347 8580.215 - 8632.855: 45.0591% ( 684) 00:09:56.347 8632.855 - 8685.494: 50.1197% ( 719) 00:09:56.347 8685.494 - 8738.133: 55.2154% ( 724) 00:09:56.348 8738.133 - 8790.773: 60.2126% ( 710) 00:09:56.348 8790.773 - 8843.412: 65.2379% ( 714) 00:09:56.348 8843.412 - 8896.051: 70.0662% ( 686) 00:09:56.348 8896.051 - 8948.691: 74.6270% ( 648) 00:09:56.348 8948.691 - 9001.330: 78.9837% ( 619) 00:09:56.348 9001.330 - 9053.969: 82.6717% ( 524) 00:09:56.348 9053.969 - 9106.609: 85.8953% ( 458) 00:09:56.348 9106.609 - 9159.248: 88.6613% ( 393) 00:09:56.348 9159.248 - 9211.888: 90.8713% ( 314) 00:09:56.348 9211.888 - 9264.527: 92.4972% ( 231) 00:09:56.348 9264.527 - 9317.166: 93.7782% ( 182) 00:09:56.348 9317.166 - 9369.806: 94.7283% ( 135) 00:09:56.348 9369.806 - 9422.445: 95.4673% ( 105) 00:09:56.348 9422.445 - 9475.084: 96.0586% 
( 84) 00:09:56.348 9475.084 - 9527.724: 96.4809% ( 60) 00:09:56.348 9527.724 - 9580.363: 96.8609% ( 54) 00:09:56.348 9580.363 - 9633.002: 97.1002% ( 34) 00:09:56.348 9633.002 - 9685.642: 97.2551% ( 22) 00:09:56.348 9685.642 - 9738.281: 97.3606% ( 15) 00:09:56.348 9738.281 - 9790.920: 97.4451% ( 12) 00:09:56.348 9790.920 - 9843.560: 97.5788% ( 19) 00:09:56.348 9843.560 - 9896.199: 97.6562% ( 11) 00:09:56.348 9896.199 - 9948.839: 97.7618% ( 15) 00:09:56.348 9948.839 - 10001.478: 97.8463% ( 12) 00:09:56.348 10001.478 - 10054.117: 97.9448% ( 14) 00:09:56.348 10054.117 - 10106.757: 98.0363% ( 13) 00:09:56.348 10106.757 - 10159.396: 98.1208% ( 12) 00:09:56.348 10159.396 - 10212.035: 98.2052% ( 12) 00:09:56.348 10212.035 - 10264.675: 98.2967% ( 13) 00:09:56.348 10264.675 - 10317.314: 98.3671% ( 10) 00:09:56.348 10317.314 - 10369.953: 98.4657% ( 14) 00:09:56.348 10369.953 - 10422.593: 98.5501% ( 12) 00:09:56.348 10422.593 - 10475.232: 98.6346% ( 12) 00:09:56.348 10475.232 - 10527.871: 98.6909% ( 8) 00:09:56.348 10527.871 - 10580.511: 98.7472% ( 8) 00:09:56.348 10580.511 - 10633.150: 98.7824% ( 5) 00:09:56.348 10633.150 - 10685.790: 98.8176% ( 5) 00:09:56.348 10685.790 - 10738.429: 98.8668% ( 7) 00:09:56.348 10738.429 - 10791.068: 98.9020% ( 5) 00:09:56.348 10791.068 - 10843.708: 98.9161% ( 2) 00:09:56.348 10843.708 - 10896.347: 98.9302% ( 2) 00:09:56.348 10896.347 - 10948.986: 98.9513% ( 3) 00:09:56.348 10948.986 - 11001.626: 98.9583% ( 1) 00:09:56.348 11001.626 - 11054.265: 98.9865% ( 4) 00:09:56.348 11054.265 - 11106.904: 98.9935% ( 1) 00:09:56.348 11106.904 - 11159.544: 99.0146% ( 3) 00:09:56.348 11212.183 - 11264.822: 99.0287% ( 2) 00:09:56.348 11264.822 - 11317.462: 99.0498% ( 3) 00:09:56.348 11317.462 - 11370.101: 99.0569% ( 1) 00:09:56.348 11370.101 - 11422.741: 99.0709% ( 2) 00:09:56.348 11422.741 - 11475.380: 99.0850% ( 2) 00:09:56.348 11475.380 - 11528.019: 99.0991% ( 2) 00:09:56.348 43164.273 - 43374.831: 99.1554% ( 8) 00:09:56.348 43374.831 - 43585.388: 99.1976% ( 6) 00:09:56.348 43585.388 - 43795.945: 99.2539% ( 8) 00:09:56.348 43795.945 - 44006.503: 99.2962% ( 6) 00:09:56.348 44006.503 - 44217.060: 99.3454% ( 7) 00:09:56.348 44217.060 - 44427.618: 99.4088% ( 9) 00:09:56.348 44427.618 - 44638.175: 99.4581% ( 7) 00:09:56.348 44638.175 - 44848.733: 99.5073% ( 7) 00:09:56.348 44848.733 - 45059.290: 99.5495% ( 6) 00:09:56.348 49902.111 - 50112.668: 99.5777% ( 4) 00:09:56.348 50112.668 - 50323.226: 99.6340% ( 8) 00:09:56.348 50323.226 - 50533.783: 99.6833% ( 7) 00:09:56.348 50533.783 - 50744.341: 99.7396% ( 8) 00:09:56.348 50744.341 - 50954.898: 99.7818% ( 6) 00:09:56.348 50954.898 - 51165.455: 99.8452% ( 9) 00:09:56.348 51165.455 - 51376.013: 99.9015% ( 8) 00:09:56.348 51376.013 - 51586.570: 99.9507% ( 7) 00:09:56.348 51586.570 - 51797.128: 99.9930% ( 6) 00:09:56.348 51797.128 - 52007.685: 100.0000% ( 1) 00:09:56.348 00:09:56.348 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:56.348 ============================================================================== 00:09:56.348 Range in us Cumulative IO count 00:09:56.348 7895.904 - 7948.543: 0.1197% ( 17) 00:09:56.348 7948.543 - 8001.182: 0.4153% ( 42) 00:09:56.348 8001.182 - 8053.822: 1.1613% ( 106) 00:09:56.348 8053.822 - 8106.461: 2.3930% ( 175) 00:09:56.348 8106.461 - 8159.100: 4.0118% ( 230) 00:09:56.348 8159.100 - 8211.740: 6.3204% ( 328) 00:09:56.348 8211.740 - 8264.379: 9.8395% ( 500) 00:09:56.348 8264.379 - 8317.018: 13.8232% ( 566) 00:09:56.348 8317.018 - 8369.658: 18.2855% ( 634) 00:09:56.348 8369.658 - 
8422.297: 23.1067% ( 685) 00:09:56.348 8422.297 - 8474.937: 28.2376% ( 729) 00:09:56.348 8474.937 - 8527.576: 33.5656% ( 757) 00:09:56.348 8527.576 - 8580.215: 38.9992% ( 772) 00:09:56.348 8580.215 - 8632.855: 44.7283% ( 814) 00:09:56.348 8632.855 - 8685.494: 50.5983% ( 834) 00:09:56.348 8685.494 - 8738.133: 56.5738% ( 849) 00:09:56.348 8738.133 - 8790.773: 62.4859% ( 840) 00:09:56.348 8790.773 - 8843.412: 68.1095% ( 799) 00:09:56.348 8843.412 - 8896.051: 73.3038% ( 738) 00:09:56.348 8896.051 - 8948.691: 78.0898% ( 680) 00:09:56.348 8948.691 - 9001.330: 82.4606% ( 621) 00:09:56.348 9001.330 - 9053.969: 85.9305% ( 493) 00:09:56.348 9053.969 - 9106.609: 88.5769% ( 376) 00:09:56.348 9106.609 - 9159.248: 90.7306% ( 306) 00:09:56.348 9159.248 - 9211.888: 92.4127% ( 239) 00:09:56.348 9211.888 - 9264.527: 93.7430% ( 189) 00:09:56.348 9264.527 - 9317.166: 94.7002% ( 136) 00:09:56.348 9317.166 - 9369.806: 95.5096% ( 115) 00:09:56.348 9369.806 - 9422.445: 96.1219% ( 87) 00:09:56.348 9422.445 - 9475.084: 96.5935% ( 67) 00:09:56.348 9475.084 - 9527.724: 96.8257% ( 33) 00:09:56.348 9527.724 - 9580.363: 97.0017% ( 25) 00:09:56.348 9580.363 - 9633.002: 97.1354% ( 19) 00:09:56.348 9633.002 - 9685.642: 97.2480% ( 16) 00:09:56.348 9685.642 - 9738.281: 97.3606% ( 16) 00:09:56.348 9738.281 - 9790.920: 97.4733% ( 16) 00:09:56.348 9790.920 - 9843.560: 97.5718% ( 14) 00:09:56.348 9843.560 - 9896.199: 97.6703% ( 14) 00:09:56.348 9896.199 - 9948.839: 97.7759% ( 15) 00:09:56.348 9948.839 - 10001.478: 97.8744% ( 14) 00:09:56.348 10001.478 - 10054.117: 97.9870% ( 16) 00:09:56.348 10054.117 - 10106.757: 98.0997% ( 16) 00:09:56.348 10106.757 - 10159.396: 98.2052% ( 15) 00:09:56.348 10159.396 - 10212.035: 98.3108% ( 15) 00:09:56.348 10212.035 - 10264.675: 98.4093% ( 14) 00:09:56.348 10264.675 - 10317.314: 98.5079% ( 14) 00:09:56.348 10317.314 - 10369.953: 98.5923% ( 12) 00:09:56.348 10369.953 - 10422.593: 98.6486% ( 8) 00:09:56.348 10475.232 - 10527.871: 98.6698% ( 3) 00:09:56.348 10527.871 - 10580.511: 98.6909% ( 3) 00:09:56.348 10580.511 - 10633.150: 98.7050% ( 2) 00:09:56.348 10633.150 - 10685.790: 98.7190% ( 2) 00:09:56.348 10685.790 - 10738.429: 98.7331% ( 2) 00:09:56.348 10738.429 - 10791.068: 98.7542% ( 3) 00:09:56.348 10791.068 - 10843.708: 98.7753% ( 3) 00:09:56.348 10843.708 - 10896.347: 98.7894% ( 2) 00:09:56.348 10896.347 - 10948.986: 98.8035% ( 2) 00:09:56.348 10948.986 - 11001.626: 98.8246% ( 3) 00:09:56.348 11001.626 - 11054.265: 98.8387% ( 2) 00:09:56.348 11054.265 - 11106.904: 98.8528% ( 2) 00:09:56.348 11106.904 - 11159.544: 98.8739% ( 3) 00:09:56.348 11159.544 - 11212.183: 98.8880% ( 2) 00:09:56.348 11212.183 - 11264.822: 98.9091% ( 3) 00:09:56.348 11264.822 - 11317.462: 98.9231% ( 2) 00:09:56.348 11317.462 - 11370.101: 98.9372% ( 2) 00:09:56.348 11370.101 - 11422.741: 98.9513% ( 2) 00:09:56.348 11422.741 - 11475.380: 98.9724% ( 3) 00:09:56.348 11475.380 - 11528.019: 98.9865% ( 2) 00:09:56.348 11528.019 - 11580.659: 99.0076% ( 3) 00:09:56.348 11580.659 - 11633.298: 99.0217% ( 2) 00:09:56.348 11633.298 - 11685.937: 99.0428% ( 3) 00:09:56.348 11685.937 - 11738.577: 99.0639% ( 3) 00:09:56.348 11738.577 - 11791.216: 99.0850% ( 3) 00:09:56.348 11791.216 - 11843.855: 99.0991% ( 2) 00:09:56.348 41479.814 - 41690.371: 99.1273% ( 4) 00:09:56.348 41690.371 - 41900.929: 99.1836% ( 8) 00:09:56.348 41900.929 - 42111.486: 99.2399% ( 8) 00:09:56.348 42111.486 - 42322.043: 99.2962% ( 8) 00:09:56.348 42322.043 - 42532.601: 99.3525% ( 8) 00:09:56.348 42532.601 - 42743.158: 99.4088% ( 8) 00:09:56.348 42743.158 - 
42953.716: 99.4651% ( 8) 00:09:56.348 42953.716 - 43164.273: 99.5214% ( 8) 00:09:56.348 43164.273 - 43374.831: 99.5495% ( 4) 00:09:56.348 48217.651 - 48428.209: 99.5918% ( 6) 00:09:56.348 48428.209 - 48638.766: 99.6410% ( 7) 00:09:56.348 48638.766 - 48849.324: 99.6974% ( 8) 00:09:56.348 48849.324 - 49059.881: 99.7607% ( 9) 00:09:56.348 49059.881 - 49270.439: 99.8170% ( 8) 00:09:56.348 49270.439 - 49480.996: 99.8733% ( 8) 00:09:56.348 49480.996 - 49691.553: 99.9367% ( 9) 00:09:56.348 49691.553 - 49902.111: 99.9930% ( 8) 00:09:56.348 49902.111 - 50112.668: 100.0000% ( 1) 00:09:56.348 00:09:56.348 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:56.348 ============================================================================== 00:09:56.348 Range in us Cumulative IO count 00:09:56.348 7843.264 - 7895.904: 0.0070% ( 1) 00:09:56.348 7895.904 - 7948.543: 0.1197% ( 16) 00:09:56.348 7948.543 - 8001.182: 0.4645% ( 49) 00:09:56.348 8001.182 - 8053.822: 1.2035% ( 105) 00:09:56.348 8053.822 - 8106.461: 2.2874% ( 154) 00:09:56.348 8106.461 - 8159.100: 3.9555% ( 237) 00:09:56.348 8159.100 - 8211.740: 6.4611% ( 356) 00:09:56.348 8211.740 - 8264.379: 9.7410% ( 466) 00:09:56.348 8264.379 - 8317.018: 13.6754% ( 559) 00:09:56.348 8317.018 - 8369.658: 18.1377% ( 634) 00:09:56.348 8369.658 - 8422.297: 23.1278% ( 709) 00:09:56.348 8422.297 - 8474.937: 28.2517% ( 728) 00:09:56.348 8474.937 - 8527.576: 33.6078% ( 761) 00:09:56.348 8527.576 - 8580.215: 39.0484% ( 773) 00:09:56.348 8580.215 - 8632.855: 44.6931% ( 802) 00:09:56.348 8632.855 - 8685.494: 50.6686% ( 849) 00:09:56.348 8685.494 - 8738.133: 56.6934% ( 856) 00:09:56.349 8738.133 - 8790.773: 62.4648% ( 820) 00:09:56.349 8790.773 - 8843.412: 68.1658% ( 810) 00:09:56.349 8843.412 - 8896.051: 73.4657% ( 753) 00:09:56.349 8896.051 - 8948.691: 78.3221% ( 690) 00:09:56.349 8948.691 - 9001.330: 82.4958% ( 593) 00:09:56.349 9001.330 - 9053.969: 86.1486% ( 519) 00:09:56.349 9053.969 - 9106.609: 88.8865% ( 389) 00:09:56.349 9106.609 - 9159.248: 91.1106% ( 316) 00:09:56.349 9159.248 - 9211.888: 92.7224% ( 229) 00:09:56.349 9211.888 - 9264.527: 93.9330% ( 172) 00:09:56.349 9264.527 - 9317.166: 94.8832% ( 135) 00:09:56.349 9317.166 - 9369.806: 95.6433% ( 108) 00:09:56.349 9369.806 - 9422.445: 96.2416% ( 85) 00:09:56.349 9422.445 - 9475.084: 96.6779% ( 62) 00:09:56.349 9475.084 - 9527.724: 96.9383% ( 37) 00:09:56.349 9527.724 - 9580.363: 97.1143% ( 25) 00:09:56.349 9580.363 - 9633.002: 97.2340% ( 17) 00:09:56.349 9633.002 - 9685.642: 97.3747% ( 20) 00:09:56.349 9685.642 - 9738.281: 97.5366% ( 23) 00:09:56.349 9738.281 - 9790.920: 97.6140% ( 11) 00:09:56.349 9790.920 - 9843.560: 97.7266% ( 16) 00:09:56.349 9843.560 - 9896.199: 97.8392% ( 16) 00:09:56.349 9896.199 - 9948.839: 97.9378% ( 14) 00:09:56.349 9948.839 - 10001.478: 98.0222% ( 12) 00:09:56.349 10001.478 - 10054.117: 98.1137% ( 13) 00:09:56.349 10054.117 - 10106.757: 98.2052% ( 13) 00:09:56.349 10106.757 - 10159.396: 98.3249% ( 17) 00:09:56.349 10159.396 - 10212.035: 98.4164% ( 13) 00:09:56.349 10212.035 - 10264.675: 98.4727% ( 8) 00:09:56.349 10264.675 - 10317.314: 98.5290% ( 8) 00:09:56.349 10317.314 - 10369.953: 98.6064% ( 11) 00:09:56.349 10369.953 - 10422.593: 98.6486% ( 6) 00:09:56.349 10422.593 - 10475.232: 98.7120% ( 9) 00:09:56.349 10475.232 - 10527.871: 98.7190% ( 1) 00:09:56.349 10527.871 - 10580.511: 98.7401% ( 3) 00:09:56.349 10580.511 - 10633.150: 98.7542% ( 2) 00:09:56.349 10633.150 - 10685.790: 98.7683% ( 2) 00:09:56.349 10685.790 - 10738.429: 98.7894% ( 3) 00:09:56.349 
10738.429 - 10791.068: 98.8035% ( 2) 00:09:56.349 10791.068 - 10843.708: 98.8176% ( 2) 00:09:56.349 10843.708 - 10896.347: 98.8387% ( 3) 00:09:56.349 10896.347 - 10948.986: 98.8528% ( 2) 00:09:56.349 10948.986 - 11001.626: 98.8739% ( 3) 00:09:56.349 11001.626 - 11054.265: 98.8950% ( 3) 00:09:56.349 11054.265 - 11106.904: 98.9091% ( 2) 00:09:56.349 11106.904 - 11159.544: 98.9231% ( 2) 00:09:56.349 11159.544 - 11212.183: 98.9443% ( 3) 00:09:56.349 11212.183 - 11264.822: 98.9654% ( 3) 00:09:56.349 11264.822 - 11317.462: 98.9794% ( 2) 00:09:56.349 11317.462 - 11370.101: 98.9935% ( 2) 00:09:56.349 11370.101 - 11422.741: 99.0146% ( 3) 00:09:56.349 11422.741 - 11475.380: 99.0287% ( 2) 00:09:56.349 11475.380 - 11528.019: 99.0428% ( 2) 00:09:56.349 11528.019 - 11580.659: 99.0639% ( 3) 00:09:56.349 11580.659 - 11633.298: 99.0780% ( 2) 00:09:56.349 11633.298 - 11685.937: 99.0921% ( 2) 00:09:56.349 11685.937 - 11738.577: 99.0991% ( 1) 00:09:56.349 40216.469 - 40427.027: 99.1484% ( 7) 00:09:56.349 40427.027 - 40637.584: 99.2047% ( 8) 00:09:56.349 40637.584 - 40848.141: 99.2680% ( 9) 00:09:56.349 40848.141 - 41058.699: 99.3243% ( 8) 00:09:56.349 41058.699 - 41269.256: 99.3736% ( 7) 00:09:56.349 41269.256 - 41479.814: 99.4299% ( 8) 00:09:56.349 41479.814 - 41690.371: 99.4862% ( 8) 00:09:56.349 41690.371 - 41900.929: 99.5425% ( 8) 00:09:56.349 41900.929 - 42111.486: 99.5495% ( 1) 00:09:56.349 46743.749 - 46954.307: 99.5847% ( 5) 00:09:56.349 46954.307 - 47164.864: 99.6410% ( 8) 00:09:56.349 47164.864 - 47375.422: 99.6833% ( 6) 00:09:56.349 47375.422 - 47585.979: 99.7396% ( 8) 00:09:56.349 47585.979 - 47796.537: 99.7889% ( 7) 00:09:56.349 47796.537 - 48007.094: 99.8452% ( 8) 00:09:56.349 48007.094 - 48217.651: 99.9015% ( 8) 00:09:56.349 48217.651 - 48428.209: 99.9507% ( 7) 00:09:56.349 48428.209 - 48638.766: 100.0000% ( 7) 00:09:56.349 00:09:56.349 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:56.349 ============================================================================== 00:09:56.349 Range in us Cumulative IO count 00:09:56.349 7790.625 - 7843.264: 0.0141% ( 2) 00:09:56.349 7843.264 - 7895.904: 0.0352% ( 3) 00:09:56.349 7895.904 - 7948.543: 0.1971% ( 23) 00:09:56.349 7948.543 - 8001.182: 0.4716% ( 39) 00:09:56.349 8001.182 - 8053.822: 1.1332% ( 94) 00:09:56.349 8053.822 - 8106.461: 2.3438% ( 172) 00:09:56.349 8106.461 - 8159.100: 4.0189% ( 238) 00:09:56.349 8159.100 - 8211.740: 6.5175% ( 355) 00:09:56.349 8211.740 - 8264.379: 9.7973% ( 466) 00:09:56.349 8264.379 - 8317.018: 13.7950% ( 568) 00:09:56.349 8317.018 - 8369.658: 18.2362% ( 631) 00:09:56.349 8369.658 - 8422.297: 22.9378% ( 668) 00:09:56.349 8422.297 - 8474.937: 27.9772% ( 716) 00:09:56.349 8474.937 - 8527.576: 33.3826% ( 768) 00:09:56.349 8527.576 - 8580.215: 39.0695% ( 808) 00:09:56.349 8580.215 - 8632.855: 44.8057% ( 815) 00:09:56.349 8632.855 - 8685.494: 50.8446% ( 858) 00:09:56.349 8685.494 - 8738.133: 56.8271% ( 850) 00:09:56.349 8738.133 - 8790.773: 62.6056% ( 821) 00:09:56.349 8790.773 - 8843.412: 68.3066% ( 810) 00:09:56.349 8843.412 - 8896.051: 73.5220% ( 741) 00:09:56.349 8896.051 - 8948.691: 78.2376% ( 670) 00:09:56.349 8948.691 - 9001.330: 82.5099% ( 607) 00:09:56.349 9001.330 - 9053.969: 86.0220% ( 499) 00:09:56.349 9053.969 - 9106.609: 88.7035% ( 381) 00:09:56.349 9106.609 - 9159.248: 90.8713% ( 308) 00:09:56.349 9159.248 - 9211.888: 92.5324% ( 236) 00:09:56.349 9211.888 - 9264.527: 93.8626% ( 189) 00:09:56.349 9264.527 - 9317.166: 94.8902% ( 146) 00:09:56.349 9317.166 - 9369.806: 95.7418% ( 121) 
00:09:56.349 9369.806 - 9422.445: 96.3260% ( 83) 00:09:56.349 9422.445 - 9475.084: 96.7061% ( 54) 00:09:56.349 9475.084 - 9527.724: 96.9313% ( 32) 00:09:56.349 9527.724 - 9580.363: 97.0721% ( 20) 00:09:56.349 9580.363 - 9633.002: 97.1847% ( 16) 00:09:56.349 9633.002 - 9685.642: 97.2903% ( 15) 00:09:56.349 9685.642 - 9738.281: 97.4029% ( 16) 00:09:56.349 9738.281 - 9790.920: 97.5155% ( 16) 00:09:56.349 9790.920 - 9843.560: 97.6211% ( 15) 00:09:56.349 9843.560 - 9896.199: 97.7407% ( 17) 00:09:56.349 9896.199 - 9948.839: 97.8392% ( 14) 00:09:56.349 9948.839 - 10001.478: 97.9307% ( 13) 00:09:56.349 10001.478 - 10054.117: 98.0152% ( 12) 00:09:56.349 10054.117 - 10106.757: 98.0926% ( 11) 00:09:56.349 10106.757 - 10159.396: 98.1630% ( 10) 00:09:56.349 10159.396 - 10212.035: 98.2404% ( 11) 00:09:56.349 10212.035 - 10264.675: 98.3108% ( 10) 00:09:56.349 10264.675 - 10317.314: 98.3812% ( 10) 00:09:56.349 10317.314 - 10369.953: 98.4586% ( 11) 00:09:56.349 10369.953 - 10422.593: 98.5290% ( 10) 00:09:56.349 10422.593 - 10475.232: 98.6135% ( 12) 00:09:56.349 10475.232 - 10527.871: 98.6416% ( 4) 00:09:56.349 10527.871 - 10580.511: 98.6486% ( 1) 00:09:56.349 10633.150 - 10685.790: 98.6627% ( 2) 00:09:56.349 10685.790 - 10738.429: 98.6768% ( 2) 00:09:56.349 10738.429 - 10791.068: 98.6909% ( 2) 00:09:56.349 10791.068 - 10843.708: 98.7050% ( 2) 00:09:56.349 10843.708 - 10896.347: 98.7261% ( 3) 00:09:56.349 10896.347 - 10948.986: 98.7401% ( 2) 00:09:56.349 10948.986 - 11001.626: 98.7683% ( 4) 00:09:56.349 11001.626 - 11054.265: 98.7824% ( 2) 00:09:56.349 11054.265 - 11106.904: 98.8035% ( 3) 00:09:56.349 11106.904 - 11159.544: 98.8105% ( 1) 00:09:56.349 11159.544 - 11212.183: 98.8316% ( 3) 00:09:56.349 11212.183 - 11264.822: 98.8457% ( 2) 00:09:56.349 11264.822 - 11317.462: 98.8668% ( 3) 00:09:56.349 11317.462 - 11370.101: 98.8809% ( 2) 00:09:56.349 11370.101 - 11422.741: 98.9020% ( 3) 00:09:56.349 11422.741 - 11475.380: 98.9161% ( 2) 00:09:56.349 11475.380 - 11528.019: 98.9302% ( 2) 00:09:56.349 11528.019 - 11580.659: 98.9443% ( 2) 00:09:56.349 11580.659 - 11633.298: 98.9654% ( 3) 00:09:56.349 11633.298 - 11685.937: 98.9794% ( 2) 00:09:56.349 11685.937 - 11738.577: 99.0006% ( 3) 00:09:56.349 11738.577 - 11791.216: 99.0146% ( 2) 00:09:56.349 11791.216 - 11843.855: 99.0287% ( 2) 00:09:56.349 11843.855 - 11896.495: 99.0428% ( 2) 00:09:56.349 11896.495 - 11949.134: 99.0639% ( 3) 00:09:56.349 11949.134 - 12001.773: 99.0780% ( 2) 00:09:56.349 12001.773 - 12054.413: 99.0991% ( 3) 00:09:56.349 38321.452 - 38532.010: 99.1554% ( 8) 00:09:56.349 38532.010 - 38742.567: 99.2117% ( 8) 00:09:56.349 38742.567 - 38953.124: 99.2610% ( 7) 00:09:56.349 38953.124 - 39163.682: 99.3173% ( 8) 00:09:56.349 39163.682 - 39374.239: 99.3666% ( 7) 00:09:56.349 39374.239 - 39584.797: 99.4229% ( 8) 00:09:56.349 39584.797 - 39795.354: 99.4721% ( 7) 00:09:56.349 39795.354 - 40005.912: 99.5284% ( 8) 00:09:56.349 40005.912 - 40216.469: 99.5495% ( 3) 00:09:56.349 44848.733 - 45059.290: 99.6059% ( 8) 00:09:56.349 45059.290 - 45269.847: 99.6551% ( 7) 00:09:56.349 45269.847 - 45480.405: 99.7044% ( 7) 00:09:56.349 45480.405 - 45690.962: 99.7677% ( 9) 00:09:56.349 45690.962 - 45901.520: 99.8170% ( 7) 00:09:56.349 45901.520 - 46112.077: 99.8733% ( 8) 00:09:56.349 46112.077 - 46322.635: 99.9296% ( 8) 00:09:56.349 46322.635 - 46533.192: 99.9789% ( 7) 00:09:56.349 46533.192 - 46743.749: 100.0000% ( 3) 00:09:56.349 00:09:56.349 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:56.349 
============================================================================== 00:09:56.349 Range in us Cumulative IO count 00:09:56.349 7843.264 - 7895.904: 0.0352% ( 5) 00:09:56.349 7895.904 - 7948.543: 0.1408% ( 15) 00:09:56.349 7948.543 - 8001.182: 0.4364% ( 42) 00:09:56.349 8001.182 - 8053.822: 1.1191% ( 97) 00:09:56.350 8053.822 - 8106.461: 2.2311% ( 158) 00:09:56.350 8106.461 - 8159.100: 3.8922% ( 236) 00:09:56.350 8159.100 - 8211.740: 6.2078% ( 329) 00:09:56.350 8211.740 - 8264.379: 9.4735% ( 464) 00:09:56.350 8264.379 - 8317.018: 13.5417% ( 578) 00:09:56.350 8317.018 - 8369.658: 17.9617% ( 628) 00:09:56.350 8369.658 - 8422.297: 22.8885% ( 700) 00:09:56.350 8422.297 - 8474.937: 28.0898% ( 739) 00:09:56.350 8474.937 - 8527.576: 33.3404% ( 746) 00:09:56.350 8527.576 - 8580.215: 38.9077% ( 791) 00:09:56.350 8580.215 - 8632.855: 44.7002% ( 823) 00:09:56.350 8632.855 - 8685.494: 50.5631% ( 833) 00:09:56.350 8685.494 - 8738.133: 56.4963% ( 843) 00:09:56.350 8738.133 - 8790.773: 62.4155% ( 841) 00:09:56.350 8790.773 - 8843.412: 68.0673% ( 803) 00:09:56.350 8843.412 - 8896.051: 73.4093% ( 759) 00:09:56.350 8896.051 - 8948.691: 78.2235% ( 684) 00:09:56.350 8948.691 - 9001.330: 82.5380% ( 613) 00:09:56.350 9001.330 - 9053.969: 86.0994% ( 506) 00:09:56.350 9053.969 - 9106.609: 88.8443% ( 390) 00:09:56.350 9106.609 - 9159.248: 90.9699% ( 302) 00:09:56.350 9159.248 - 9211.888: 92.5957% ( 231) 00:09:56.350 9211.888 - 9264.527: 93.7993% ( 171) 00:09:56.350 9264.527 - 9317.166: 94.8480% ( 149) 00:09:56.350 9317.166 - 9369.806: 95.7066% ( 122) 00:09:56.350 9369.806 - 9422.445: 96.2697% ( 80) 00:09:56.350 9422.445 - 9475.084: 96.6427% ( 53) 00:09:56.350 9475.084 - 9527.724: 96.9032% ( 37) 00:09:56.350 9527.724 - 9580.363: 97.0932% ( 27) 00:09:56.350 9580.363 - 9633.002: 97.2410% ( 21) 00:09:56.350 9633.002 - 9685.642: 97.3536% ( 16) 00:09:56.350 9685.642 - 9738.281: 97.5084% ( 22) 00:09:56.350 9738.281 - 9790.920: 97.5929% ( 12) 00:09:56.350 9790.920 - 9843.560: 97.6914% ( 14) 00:09:56.350 9843.560 - 9896.199: 97.7970% ( 15) 00:09:56.350 9896.199 - 9948.839: 97.8674% ( 10) 00:09:56.350 9948.839 - 10001.478: 97.9378% ( 10) 00:09:56.350 10001.478 - 10054.117: 98.0082% ( 10) 00:09:56.350 10054.117 - 10106.757: 98.0856% ( 11) 00:09:56.350 10106.757 - 10159.396: 98.1630% ( 11) 00:09:56.350 10159.396 - 10212.035: 98.2475% ( 12) 00:09:56.350 10212.035 - 10264.675: 98.3178% ( 10) 00:09:56.350 10264.675 - 10317.314: 98.3882% ( 10) 00:09:56.350 10317.314 - 10369.953: 98.4445% ( 8) 00:09:56.350 10369.953 - 10422.593: 98.4868% ( 6) 00:09:56.350 10422.593 - 10475.232: 98.5360% ( 7) 00:09:56.350 10475.232 - 10527.871: 98.5572% ( 3) 00:09:56.350 10527.871 - 10580.511: 98.5712% ( 2) 00:09:56.350 10580.511 - 10633.150: 98.5923% ( 3) 00:09:56.350 10633.150 - 10685.790: 98.6064% ( 2) 00:09:56.350 10685.790 - 10738.429: 98.6275% ( 3) 00:09:56.350 10738.429 - 10791.068: 98.6416% ( 2) 00:09:56.350 10791.068 - 10843.708: 98.6486% ( 1) 00:09:56.350 11001.626 - 11054.265: 98.6557% ( 1) 00:09:56.350 11054.265 - 11106.904: 98.6909% ( 5) 00:09:56.350 11106.904 - 11159.544: 98.6979% ( 1) 00:09:56.350 11159.544 - 11212.183: 98.7120% ( 2) 00:09:56.350 11212.183 - 11264.822: 98.7331% ( 3) 00:09:56.350 11264.822 - 11317.462: 98.7472% ( 2) 00:09:56.350 11317.462 - 11370.101: 98.7613% ( 2) 00:09:56.350 11370.101 - 11422.741: 98.7824% ( 3) 00:09:56.350 11422.741 - 11475.380: 98.7965% ( 2) 00:09:56.350 11475.380 - 11528.019: 98.8105% ( 2) 00:09:56.350 11528.019 - 11580.659: 98.8246% ( 2) 00:09:56.350 11580.659 - 11633.298: 98.8387% ( 
2) 00:09:56.350 11633.298 - 11685.937: 98.8598% ( 3) 00:09:56.350 11685.937 - 11738.577: 98.8739% ( 2) 00:09:56.350 11738.577 - 11791.216: 98.8950% ( 3) 00:09:56.350 11791.216 - 11843.855: 98.9091% ( 2) 00:09:56.350 11843.855 - 11896.495: 98.9302% ( 3) 00:09:56.350 11896.495 - 11949.134: 98.9443% ( 2) 00:09:56.350 11949.134 - 12001.773: 98.9654% ( 3) 00:09:56.350 12001.773 - 12054.413: 98.9865% ( 3) 00:09:56.350 12054.413 - 12107.052: 99.0006% ( 2) 00:09:56.350 12107.052 - 12159.692: 99.0146% ( 2) 00:09:56.350 12159.692 - 12212.331: 99.0287% ( 2) 00:09:56.350 12212.331 - 12264.970: 99.0498% ( 3) 00:09:56.350 12264.970 - 12317.610: 99.0639% ( 2) 00:09:56.350 12317.610 - 12370.249: 99.0780% ( 2) 00:09:56.350 12370.249 - 12422.888: 99.0991% ( 3) 00:09:56.350 36215.878 - 36426.435: 99.1202% ( 3) 00:09:56.350 36426.435 - 36636.993: 99.1765% ( 8) 00:09:56.350 36636.993 - 36847.550: 99.2328% ( 8) 00:09:56.350 36847.550 - 37058.108: 99.2891% ( 8) 00:09:56.350 37058.108 - 37268.665: 99.3454% ( 8) 00:09:56.350 37268.665 - 37479.222: 99.4017% ( 8) 00:09:56.350 37479.222 - 37689.780: 99.4581% ( 8) 00:09:56.350 37689.780 - 37900.337: 99.5144% ( 8) 00:09:56.350 37900.337 - 38110.895: 99.5495% ( 5) 00:09:56.350 42743.158 - 42953.716: 99.5988% ( 7) 00:09:56.350 42953.716 - 43164.273: 99.6551% ( 8) 00:09:56.350 43164.273 - 43374.831: 99.7044% ( 7) 00:09:56.350 43374.831 - 43585.388: 99.7537% ( 7) 00:09:56.350 43585.388 - 43795.945: 99.8100% ( 8) 00:09:56.350 43795.945 - 44006.503: 99.8592% ( 7) 00:09:56.350 44006.503 - 44217.060: 99.9085% ( 7) 00:09:56.350 44217.060 - 44427.618: 99.9648% ( 8) 00:09:56.350 44427.618 - 44638.175: 100.0000% ( 5) 00:09:56.350 00:09:56.350 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:56.350 ============================================================================== 00:09:56.350 Range in us Cumulative IO count 00:09:56.350 7843.264 - 7895.904: 0.0631% ( 9) 00:09:56.350 7895.904 - 7948.543: 0.1541% ( 13) 00:09:56.350 7948.543 - 8001.182: 0.4624% ( 44) 00:09:56.350 8001.182 - 8053.822: 1.1701% ( 101) 00:09:56.350 8053.822 - 8106.461: 2.3963% ( 175) 00:09:56.350 8106.461 - 8159.100: 4.0849% ( 241) 00:09:56.350 8159.100 - 8211.740: 6.4812% ( 342) 00:09:56.350 8211.740 - 8264.379: 9.8164% ( 476) 00:09:56.350 8264.379 - 8317.018: 13.7122% ( 556) 00:09:56.350 8317.018 - 8369.658: 18.0984% ( 626) 00:09:56.350 8369.658 - 8422.297: 22.7438% ( 663) 00:09:56.350 8422.297 - 8474.937: 27.9709% ( 746) 00:09:56.350 8474.937 - 8527.576: 33.2469% ( 753) 00:09:56.350 8527.576 - 8580.215: 38.8033% ( 793) 00:09:56.350 8580.215 - 8632.855: 44.4437% ( 805) 00:09:56.350 8632.855 - 8685.494: 50.3153% ( 838) 00:09:56.350 8685.494 - 8738.133: 56.1939% ( 839) 00:09:56.350 8738.133 - 8790.773: 61.9535% ( 822) 00:09:56.350 8790.773 - 8843.412: 67.5378% ( 797) 00:09:56.350 8843.412 - 8896.051: 72.8910% ( 764) 00:09:56.350 8896.051 - 8948.691: 77.6906% ( 685) 00:09:56.350 8948.691 - 9001.330: 81.9717% ( 611) 00:09:56.350 9001.330 - 9053.969: 85.5732% ( 514) 00:09:56.350 9053.969 - 9106.609: 88.3688% ( 399) 00:09:56.350 9106.609 - 9159.248: 90.5689% ( 314) 00:09:56.350 9159.248 - 9211.888: 92.2506% ( 240) 00:09:56.350 9211.888 - 9264.527: 93.5398% ( 184) 00:09:56.350 9264.527 - 9317.166: 94.4787% ( 134) 00:09:56.350 9317.166 - 9369.806: 95.2004% ( 103) 00:09:56.350 9369.806 - 9422.445: 95.8240% ( 89) 00:09:56.350 9422.445 - 9475.084: 96.2514% ( 61) 00:09:56.350 9475.084 - 9527.724: 96.5387% ( 41) 00:09:56.350 9527.724 - 9580.363: 96.7559% ( 31) 00:09:56.350 9580.363 - 9633.002: 
96.8540% ( 14) 00:09:56.350 9633.002 - 9685.642: 96.9941% ( 20) 00:09:56.350 9685.642 - 9738.281: 97.1342% ( 20) 00:09:56.350 9738.281 - 9790.920: 97.2534% ( 17) 00:09:56.350 9790.920 - 9843.560: 97.3445% ( 13) 00:09:56.350 9843.560 - 9896.199: 97.4566% ( 16) 00:09:56.350 9896.199 - 9948.839: 97.5617% ( 15) 00:09:56.350 9948.839 - 10001.478: 97.6738% ( 16) 00:09:56.350 10001.478 - 10054.117: 97.7719% ( 14) 00:09:56.350 10054.117 - 10106.757: 97.8840% ( 16) 00:09:56.350 10106.757 - 10159.396: 97.9751% ( 13) 00:09:56.350 10159.396 - 10212.035: 98.0802% ( 15) 00:09:56.350 10212.035 - 10264.675: 98.1853% ( 15) 00:09:56.350 10264.675 - 10317.314: 98.2834% ( 14) 00:09:56.350 10317.314 - 10369.953: 98.3604% ( 11) 00:09:56.350 10369.953 - 10422.593: 98.4235% ( 9) 00:09:56.350 10422.593 - 10475.232: 98.4655% ( 6) 00:09:56.350 10475.232 - 10527.871: 98.4795% ( 2) 00:09:56.350 10527.871 - 10580.511: 98.5006% ( 3) 00:09:56.350 10580.511 - 10633.150: 98.5146% ( 2) 00:09:56.350 10633.150 - 10685.790: 98.5356% ( 3) 00:09:56.350 10685.790 - 10738.429: 98.5496% ( 2) 00:09:56.350 10738.429 - 10791.068: 98.5636% ( 2) 00:09:56.350 10791.068 - 10843.708: 98.5846% ( 3) 00:09:56.350 10843.708 - 10896.347: 98.5987% ( 2) 00:09:56.350 10896.347 - 10948.986: 98.6127% ( 2) 00:09:56.350 10948.986 - 11001.626: 98.6267% ( 2) 00:09:56.350 11001.626 - 11054.265: 98.6477% ( 3) 00:09:56.350 11054.265 - 11106.904: 98.6547% ( 1) 00:09:56.350 11317.462 - 11370.101: 98.6617% ( 1) 00:09:56.350 11370.101 - 11422.741: 98.6827% ( 3) 00:09:56.350 11422.741 - 11475.380: 98.6967% ( 2) 00:09:56.350 11475.380 - 11528.019: 98.7108% ( 2) 00:09:56.350 11528.019 - 11580.659: 98.7248% ( 2) 00:09:56.350 11580.659 - 11633.298: 98.7458% ( 3) 00:09:56.350 11633.298 - 11685.937: 98.7598% ( 2) 00:09:56.350 11685.937 - 11738.577: 98.7808% ( 3) 00:09:56.350 11738.577 - 11791.216: 98.8018% ( 3) 00:09:56.350 11791.216 - 11843.855: 98.8159% ( 2) 00:09:56.350 11843.855 - 11896.495: 98.8369% ( 3) 00:09:56.350 11896.495 - 11949.134: 98.8439% ( 1) 00:09:56.350 11949.134 - 12001.773: 98.8649% ( 3) 00:09:56.350 12001.773 - 12054.413: 98.8789% ( 2) 00:09:56.350 12054.413 - 12107.052: 98.8929% ( 2) 00:09:56.350 12107.052 - 12159.692: 98.9070% ( 2) 00:09:56.350 12159.692 - 12212.331: 98.9210% ( 2) 00:09:56.350 12212.331 - 12264.970: 98.9420% ( 3) 00:09:56.350 12264.970 - 12317.610: 98.9630% ( 3) 00:09:56.351 12317.610 - 12370.249: 98.9770% ( 2) 00:09:56.351 12370.249 - 12422.888: 98.9980% ( 3) 00:09:56.351 12422.888 - 12475.528: 99.0121% ( 2) 00:09:56.351 12475.528 - 12528.167: 99.0261% ( 2) 00:09:56.351 12528.167 - 12580.806: 99.0471% ( 3) 00:09:56.351 12580.806 - 12633.446: 99.0541% ( 1) 00:09:56.351 12633.446 - 12686.085: 99.0751% ( 3) 00:09:56.351 12686.085 - 12738.724: 99.0891% ( 2) 00:09:56.351 12738.724 - 12791.364: 99.1031% ( 2) 00:09:56.351 29478.040 - 29688.598: 99.1382% ( 5) 00:09:56.351 29688.598 - 29899.155: 99.1942% ( 8) 00:09:56.351 29899.155 - 30109.712: 99.2503% ( 8) 00:09:56.351 30109.712 - 30320.270: 99.3063% ( 8) 00:09:56.351 30320.270 - 30530.827: 99.3624% ( 8) 00:09:56.351 30530.827 - 30741.385: 99.4184% ( 8) 00:09:56.351 30741.385 - 30951.942: 99.4815% ( 9) 00:09:56.351 30951.942 - 31162.500: 99.5305% ( 7) 00:09:56.351 31162.500 - 31373.057: 99.5516% ( 3) 00:09:56.351 35794.763 - 36005.320: 99.5726% ( 3) 00:09:56.351 36005.320 - 36215.878: 99.6216% ( 7) 00:09:56.351 36215.878 - 36426.435: 99.6847% ( 9) 00:09:56.351 36426.435 - 36636.993: 99.7337% ( 7) 00:09:56.351 36636.993 - 36847.550: 99.7898% ( 8) 00:09:56.351 36847.550 - 37058.108: 
99.8529% ( 9) 00:09:56.351 37058.108 - 37268.665: 99.9089% ( 8) 00:09:56.351 37268.665 - 37479.222: 99.9650% ( 8) 00:09:56.351 37479.222 - 37689.780: 100.0000% ( 5) 00:09:56.351 00:09:56.351 10:21:55 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:57.732 Initializing NVMe Controllers 00:09:57.732 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:57.732 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:57.732 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:57.732 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:57.732 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:57.732 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:57.732 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:57.732 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:57.732 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:57.732 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:57.732 Initialization complete. Launching workers. 00:09:57.732 ======================================================== 00:09:57.732 Latency(us) 00:09:57.732 Device Information : IOPS MiB/s Average min max 00:09:57.732 PCIE (0000:00:10.0) NSID 1 from core 0: 13391.56 156.93 9578.94 7122.97 41562.43 00:09:57.732 PCIE (0000:00:11.0) NSID 1 from core 0: 13391.56 156.93 9564.01 7109.55 39536.81 00:09:57.732 PCIE (0000:00:13.0) NSID 1 from core 0: 13391.56 156.93 9548.70 7053.58 38818.63 00:09:57.732 PCIE (0000:00:12.0) NSID 1 from core 0: 13391.56 156.93 9533.11 7149.50 36992.67 00:09:57.732 PCIE (0000:00:12.0) NSID 2 from core 0: 13391.56 156.93 9517.61 7110.00 35542.62 00:09:57.732 PCIE (0000:00:12.0) NSID 3 from core 0: 13455.33 157.68 9457.43 7114.21 27369.34 00:09:57.732 ======================================================== 00:09:57.732 Total : 80413.15 942.34 9533.24 7053.58 41562.43 00:09:57.732 00:09:57.732 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:57.732 ================================================================================= 00:09:57.732 1.00000% : 7474.789us 00:09:57.732 10.00000% : 7843.264us 00:09:57.732 25.00000% : 8159.100us 00:09:57.732 50.00000% : 8790.773us 00:09:57.732 75.00000% : 9527.724us 00:09:57.732 90.00000% : 12317.610us 00:09:57.732 95.00000% : 14212.627us 00:09:57.732 98.00000% : 16634.037us 00:09:57.733 99.00000% : 18739.611us 00:09:57.733 99.50000% : 33478.631us 00:09:57.733 99.90000% : 41269.256us 00:09:57.733 99.99000% : 41690.371us 00:09:57.733 99.99900% : 41690.371us 00:09:57.733 99.99990% : 41690.371us 00:09:57.733 99.99999% : 41690.371us 00:09:57.733 00:09:57.733 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:57.733 ================================================================================= 00:09:57.733 1.00000% : 7474.789us 00:09:57.733 10.00000% : 7843.264us 00:09:57.733 25.00000% : 8159.100us 00:09:57.733 50.00000% : 8843.412us 00:09:57.733 75.00000% : 9422.445us 00:09:57.733 90.00000% : 12212.331us 00:09:57.733 95.00000% : 13896.790us 00:09:57.733 98.00000% : 17160.431us 00:09:57.733 99.00000% : 18950.169us 00:09:57.733 99.50000% : 32004.729us 00:09:57.733 99.90000% : 39374.239us 00:09:57.733 99.99000% : 39584.797us 00:09:57.733 99.99900% : 39584.797us 00:09:57.733 99.99990% : 39584.797us 00:09:57.733 99.99999% : 39584.797us 00:09:57.733 00:09:57.733 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:57.733 
================================================================================= 00:09:57.733 1.00000% : 7474.789us 00:09:57.733 10.00000% : 7843.264us 00:09:57.733 25.00000% : 8159.100us 00:09:57.733 50.00000% : 8843.412us 00:09:57.733 75.00000% : 9422.445us 00:09:57.733 90.00000% : 12370.249us 00:09:57.733 95.00000% : 13686.233us 00:09:57.733 98.00000% : 17370.988us 00:09:57.733 99.00000% : 18844.890us 00:09:57.733 99.50000% : 31373.057us 00:09:57.733 99.90000% : 38532.010us 00:09:57.733 99.99000% : 38953.124us 00:09:57.733 99.99900% : 38953.124us 00:09:57.733 99.99990% : 38953.124us 00:09:57.733 99.99999% : 38953.124us 00:09:57.733 00:09:57.733 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:57.733 ================================================================================= 00:09:57.733 1.00000% : 7474.789us 00:09:57.733 10.00000% : 7843.264us 00:09:57.733 25.00000% : 8211.740us 00:09:57.733 50.00000% : 8843.412us 00:09:57.733 75.00000% : 9422.445us 00:09:57.733 90.00000% : 12317.610us 00:09:57.733 95.00000% : 13791.512us 00:09:57.733 98.00000% : 17581.545us 00:09:57.733 99.00000% : 19160.726us 00:09:57.733 99.50000% : 29267.483us 00:09:57.733 99.90000% : 36847.550us 00:09:57.733 99.99000% : 37058.108us 00:09:57.733 99.99900% : 37058.108us 00:09:57.733 99.99990% : 37058.108us 00:09:57.733 99.99999% : 37058.108us 00:09:57.733 00:09:57.733 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:57.733 ================================================================================= 00:09:57.733 1.00000% : 7474.789us 00:09:57.733 10.00000% : 7790.625us 00:09:57.733 25.00000% : 8211.740us 00:09:57.733 50.00000% : 8843.412us 00:09:57.733 75.00000% : 9475.084us 00:09:57.733 90.00000% : 12264.970us 00:09:57.733 95.00000% : 14107.348us 00:09:57.733 98.00000% : 17265.709us 00:09:57.733 99.00000% : 19581.841us 00:09:57.733 99.50000% : 27793.581us 00:09:57.733 99.90000% : 35373.648us 00:09:57.733 99.99000% : 35584.206us 00:09:57.733 99.99900% : 35584.206us 00:09:57.733 99.99990% : 35584.206us 00:09:57.733 99.99999% : 35584.206us 00:09:57.733 00:09:57.733 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:57.733 ================================================================================= 00:09:57.733 1.00000% : 7474.789us 00:09:57.733 10.00000% : 7843.264us 00:09:57.733 25.00000% : 8211.740us 00:09:57.733 50.00000% : 8843.412us 00:09:57.733 75.00000% : 9475.084us 00:09:57.733 90.00000% : 12317.610us 00:09:57.733 95.00000% : 14317.905us 00:09:57.733 98.00000% : 16949.873us 00:09:57.733 99.00000% : 18634.333us 00:09:57.733 99.50000% : 19687.120us 00:09:57.733 99.90000% : 27161.908us 00:09:57.733 99.99000% : 27372.466us 00:09:57.733 99.99900% : 27372.466us 00:09:57.733 99.99990% : 27372.466us 00:09:57.733 99.99999% : 27372.466us 00:09:57.733 00:09:57.733 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:57.733 ============================================================================== 00:09:57.733 Range in us Cumulative IO count 00:09:57.733 7106.313 - 7158.953: 0.0223% ( 3) 00:09:57.733 7158.953 - 7211.592: 0.0446% ( 3) 00:09:57.733 7211.592 - 7264.231: 0.1042% ( 8) 00:09:57.733 7264.231 - 7316.871: 0.1860% ( 11) 00:09:57.733 7316.871 - 7369.510: 0.4688% ( 38) 00:09:57.733 7369.510 - 7422.149: 0.9896% ( 70) 00:09:57.733 7422.149 - 7474.789: 1.7188% ( 98) 00:09:57.733 7474.789 - 7527.428: 2.7158% ( 134) 00:09:57.733 7527.428 - 7580.067: 3.7426% ( 138) 00:09:57.733 7580.067 - 7632.707: 4.8884% ( 154) 00:09:57.733 
7632.707 - 7685.346: 6.4211% ( 206) 00:09:57.733 7685.346 - 7737.986: 8.0729% ( 222) 00:09:57.733 7737.986 - 7790.625: 9.6726% ( 215) 00:09:57.733 7790.625 - 7843.264: 11.1905% ( 204) 00:09:57.733 7843.264 - 7895.904: 13.1548% ( 264) 00:09:57.733 7895.904 - 7948.543: 15.2902% ( 287) 00:09:57.733 7948.543 - 8001.182: 17.6786% ( 321) 00:09:57.733 8001.182 - 8053.822: 20.1414% ( 331) 00:09:57.733 8053.822 - 8106.461: 22.7902% ( 356) 00:09:57.733 8106.461 - 8159.100: 25.3125% ( 339) 00:09:57.733 8159.100 - 8211.740: 28.1101% ( 376) 00:09:57.733 8211.740 - 8264.379: 30.7738% ( 358) 00:09:57.733 8264.379 - 8317.018: 33.3333% ( 344) 00:09:57.733 8317.018 - 8369.658: 35.6101% ( 306) 00:09:57.733 8369.658 - 8422.297: 37.4182% ( 243) 00:09:57.733 8422.297 - 8474.937: 39.1964% ( 239) 00:09:57.733 8474.937 - 8527.576: 41.1682% ( 265) 00:09:57.733 8527.576 - 8580.215: 43.1548% ( 267) 00:09:57.733 8580.215 - 8632.855: 45.3051% ( 289) 00:09:57.733 8632.855 - 8685.494: 47.2396% ( 260) 00:09:57.733 8685.494 - 8738.133: 48.8170% ( 212) 00:09:57.733 8738.133 - 8790.773: 50.7961% ( 266) 00:09:57.733 8790.773 - 8843.412: 53.2068% ( 324) 00:09:57.733 8843.412 - 8896.051: 55.4241% ( 298) 00:09:57.733 8896.051 - 8948.691: 57.5670% ( 288) 00:09:57.733 8948.691 - 9001.330: 59.6503% ( 280) 00:09:57.733 9001.330 - 9053.969: 61.5699% ( 258) 00:09:57.733 9053.969 - 9106.609: 63.5863% ( 271) 00:09:57.733 9106.609 - 9159.248: 65.4762% ( 254) 00:09:57.733 9159.248 - 9211.888: 67.4702% ( 268) 00:09:57.733 9211.888 - 9264.527: 69.2262% ( 236) 00:09:57.733 9264.527 - 9317.166: 70.7292% ( 202) 00:09:57.733 9317.166 - 9369.806: 72.2991% ( 211) 00:09:57.733 9369.806 - 9422.445: 73.6458% ( 181) 00:09:57.733 9422.445 - 9475.084: 74.7768% ( 152) 00:09:57.733 9475.084 - 9527.724: 75.4315% ( 88) 00:09:57.733 9527.724 - 9580.363: 76.1905% ( 102) 00:09:57.733 9580.363 - 9633.002: 76.9866% ( 107) 00:09:57.733 9633.002 - 9685.642: 77.7604% ( 104) 00:09:57.733 9685.642 - 9738.281: 78.2440% ( 65) 00:09:57.733 9738.281 - 9790.920: 78.9062% ( 89) 00:09:57.733 9790.920 - 9843.560: 79.3304% ( 57) 00:09:57.733 9843.560 - 9896.199: 79.7470% ( 56) 00:09:57.733 9896.199 - 9948.839: 80.0595% ( 42) 00:09:57.733 9948.839 - 10001.478: 80.2679% ( 28) 00:09:57.733 10001.478 - 10054.117: 80.5432% ( 37) 00:09:57.733 10054.117 - 10106.757: 80.8854% ( 46) 00:09:57.733 10106.757 - 10159.396: 81.2128% ( 44) 00:09:57.733 10159.396 - 10212.035: 81.4360% ( 30) 00:09:57.733 10212.035 - 10264.675: 81.6146% ( 24) 00:09:57.733 10264.675 - 10317.314: 81.7857% ( 23) 00:09:57.733 10317.314 - 10369.953: 81.9196% ( 18) 00:09:57.733 10369.953 - 10422.593: 82.0089% ( 12) 00:09:57.733 10422.593 - 10475.232: 82.0908% ( 11) 00:09:57.733 10475.232 - 10527.871: 82.2396% ( 20) 00:09:57.733 10527.871 - 10580.511: 82.5149% ( 37) 00:09:57.733 10580.511 - 10633.150: 82.9464% ( 58) 00:09:57.733 10633.150 - 10685.790: 83.1920% ( 33) 00:09:57.733 10685.790 - 10738.429: 83.4524% ( 35) 00:09:57.733 10738.429 - 10791.068: 83.6533% ( 27) 00:09:57.733 10791.068 - 10843.708: 83.9286% ( 37) 00:09:57.733 10843.708 - 10896.347: 84.2188% ( 39) 00:09:57.733 10896.347 - 10948.986: 84.5685% ( 47) 00:09:57.733 10948.986 - 11001.626: 84.9330% ( 49) 00:09:57.733 11001.626 - 11054.265: 85.0893% ( 21) 00:09:57.733 11054.265 - 11106.904: 85.2902% ( 27) 00:09:57.733 11106.904 - 11159.544: 85.4762% ( 25) 00:09:57.733 11159.544 - 11212.183: 85.6994% ( 30) 00:09:57.733 11212.183 - 11264.822: 85.8557% ( 21) 00:09:57.733 11264.822 - 11317.462: 86.1533% ( 40) 00:09:57.733 11317.462 - 11370.101: 86.3318% ( 24) 
00:09:57.733 11370.101 - 11422.741: 86.5030% ( 23) 00:09:57.733 11422.741 - 11475.380: 86.6592% ( 21) 00:09:57.733 11475.380 - 11528.019: 86.9048% ( 33) 00:09:57.733 11528.019 - 11580.659: 87.1280% ( 30) 00:09:57.733 11580.659 - 11633.298: 87.3214% ( 26) 00:09:57.733 11633.298 - 11685.937: 87.5223% ( 27) 00:09:57.734 11685.937 - 11738.577: 87.9539% ( 58) 00:09:57.734 11738.577 - 11791.216: 88.3259% ( 50) 00:09:57.734 11791.216 - 11843.855: 88.5119% ( 25) 00:09:57.734 11843.855 - 11896.495: 88.7872% ( 37) 00:09:57.734 11896.495 - 11949.134: 88.9732% ( 25) 00:09:57.734 11949.134 - 12001.773: 89.1518% ( 24) 00:09:57.734 12001.773 - 12054.413: 89.3676% ( 29) 00:09:57.734 12054.413 - 12107.052: 89.5164% ( 20) 00:09:57.734 12107.052 - 12159.692: 89.5982% ( 11) 00:09:57.734 12159.692 - 12212.331: 89.7321% ( 18) 00:09:57.734 12212.331 - 12264.970: 89.8512% ( 16) 00:09:57.734 12264.970 - 12317.610: 90.0446% ( 26) 00:09:57.734 12317.610 - 12370.249: 90.3051% ( 35) 00:09:57.734 12370.249 - 12422.888: 90.5729% ( 36) 00:09:57.734 12422.888 - 12475.528: 90.9077% ( 45) 00:09:57.734 12475.528 - 12528.167: 91.3095% ( 54) 00:09:57.734 12528.167 - 12580.806: 91.7113% ( 54) 00:09:57.734 12580.806 - 12633.446: 92.0610% ( 47) 00:09:57.734 12633.446 - 12686.085: 92.3065% ( 33) 00:09:57.734 12686.085 - 12738.724: 92.5744% ( 36) 00:09:57.734 12738.724 - 12791.364: 92.7009% ( 17) 00:09:57.734 12791.364 - 12844.003: 92.8497% ( 20) 00:09:57.734 12844.003 - 12896.643: 93.0729% ( 30) 00:09:57.734 12896.643 - 12949.282: 93.1994% ( 17) 00:09:57.734 12949.282 - 13001.921: 93.3557% ( 21) 00:09:57.734 13001.921 - 13054.561: 93.5789% ( 30) 00:09:57.734 13054.561 - 13107.200: 93.6607% ( 11) 00:09:57.734 13107.200 - 13159.839: 93.7500% ( 12) 00:09:57.734 13159.839 - 13212.479: 93.8542% ( 14) 00:09:57.734 13212.479 - 13265.118: 93.9509% ( 13) 00:09:57.734 13265.118 - 13317.757: 94.0253% ( 10) 00:09:57.734 13317.757 - 13370.397: 94.0699% ( 6) 00:09:57.734 13370.397 - 13423.036: 94.1741% ( 14) 00:09:57.734 13423.036 - 13475.676: 94.3006% ( 17) 00:09:57.734 13475.676 - 13580.954: 94.3973% ( 13) 00:09:57.734 13580.954 - 13686.233: 94.4866% ( 12) 00:09:57.734 13686.233 - 13791.512: 94.7321% ( 33) 00:09:57.734 13791.512 - 13896.790: 94.8065% ( 10) 00:09:57.734 13896.790 - 14002.069: 94.9033% ( 13) 00:09:57.734 14002.069 - 14107.348: 94.9702% ( 9) 00:09:57.734 14107.348 - 14212.627: 95.0149% ( 6) 00:09:57.734 14212.627 - 14317.905: 95.0818% ( 9) 00:09:57.734 14317.905 - 14423.184: 95.0967% ( 2) 00:09:57.734 14423.184 - 14528.463: 95.1190% ( 3) 00:09:57.734 14528.463 - 14633.741: 95.1711% ( 7) 00:09:57.734 14633.741 - 14739.020: 95.3720% ( 27) 00:09:57.734 14739.020 - 14844.299: 95.5432% ( 23) 00:09:57.734 14844.299 - 14949.578: 95.7143% ( 23) 00:09:57.734 14949.578 - 15054.856: 95.8631% ( 20) 00:09:57.734 15054.856 - 15160.135: 96.0268% ( 22) 00:09:57.734 15160.135 - 15265.414: 96.0789% ( 7) 00:09:57.734 15265.414 - 15370.692: 96.1682% ( 12) 00:09:57.734 15370.692 - 15475.971: 96.2426% ( 10) 00:09:57.734 15475.971 - 15581.250: 96.4955% ( 34) 00:09:57.734 15581.250 - 15686.529: 96.7560% ( 35) 00:09:57.734 15686.529 - 15791.807: 96.8899% ( 18) 00:09:57.734 15791.807 - 15897.086: 97.0461% ( 21) 00:09:57.734 15897.086 - 16002.365: 97.2247% ( 24) 00:09:57.734 16002.365 - 16107.643: 97.3958% ( 23) 00:09:57.734 16107.643 - 16212.922: 97.6637% ( 36) 00:09:57.734 16212.922 - 16318.201: 97.8274% ( 22) 00:09:57.734 16318.201 - 16423.480: 97.9688% ( 19) 00:09:57.734 16423.480 - 16528.758: 97.9985% ( 4) 00:09:57.734 16528.758 - 16634.037: 98.0506% 
( 7) 00:09:57.734 16634.037 - 16739.316: 98.0878% ( 5) 00:09:57.734 16739.316 - 16844.594: 98.0952% ( 1) 00:09:57.734 17265.709 - 17370.988: 98.1027% ( 1) 00:09:57.734 17476.267 - 17581.545: 98.2217% ( 16) 00:09:57.734 17581.545 - 17686.824: 98.2961% ( 10) 00:09:57.734 17686.824 - 17792.103: 98.3408% ( 6) 00:09:57.734 17792.103 - 17897.382: 98.4375% ( 13) 00:09:57.734 17897.382 - 18002.660: 98.5268% ( 12) 00:09:57.734 18002.660 - 18107.939: 98.6012% ( 10) 00:09:57.734 18107.939 - 18213.218: 98.6830% ( 11) 00:09:57.734 18213.218 - 18318.496: 98.8244% ( 19) 00:09:57.734 18318.496 - 18423.775: 98.8765% ( 7) 00:09:57.734 18423.775 - 18529.054: 98.9137% ( 5) 00:09:57.734 18529.054 - 18634.333: 98.9509% ( 5) 00:09:57.734 18634.333 - 18739.611: 99.0104% ( 8) 00:09:57.734 18739.611 - 18844.890: 99.0476% ( 5) 00:09:57.734 31794.172 - 32004.729: 99.0699% ( 3) 00:09:57.734 32004.729 - 32215.287: 99.1741% ( 14) 00:09:57.734 32215.287 - 32425.844: 99.2485% ( 10) 00:09:57.734 32425.844 - 32636.402: 99.3304% ( 11) 00:09:57.734 32636.402 - 32846.959: 99.3824% ( 7) 00:09:57.734 32846.959 - 33057.516: 99.4048% ( 3) 00:09:57.734 33057.516 - 33268.074: 99.4568% ( 7) 00:09:57.734 33268.074 - 33478.631: 99.5164% ( 8) 00:09:57.734 33478.631 - 33689.189: 99.5238% ( 1) 00:09:57.734 39584.797 - 39795.354: 99.5685% ( 6) 00:09:57.734 39795.354 - 40005.912: 99.6205% ( 7) 00:09:57.734 40005.912 - 40216.469: 99.6726% ( 7) 00:09:57.734 40216.469 - 40427.027: 99.7173% ( 6) 00:09:57.734 40427.027 - 40637.584: 99.7768% ( 8) 00:09:57.734 40637.584 - 40848.141: 99.8289% ( 7) 00:09:57.734 40848.141 - 41058.699: 99.8735% ( 6) 00:09:57.734 41058.699 - 41269.256: 99.9330% ( 8) 00:09:57.734 41269.256 - 41479.814: 99.9851% ( 7) 00:09:57.734 41479.814 - 41690.371: 100.0000% ( 2) 00:09:57.734 00:09:57.734 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:57.734 ============================================================================== 00:09:57.734 Range in us Cumulative IO count 00:09:57.734 7106.313 - 7158.953: 0.0074% ( 1) 00:09:57.734 7158.953 - 7211.592: 0.0149% ( 1) 00:09:57.734 7211.592 - 7264.231: 0.0670% ( 7) 00:09:57.734 7264.231 - 7316.871: 0.2009% ( 18) 00:09:57.734 7316.871 - 7369.510: 0.4464% ( 33) 00:09:57.734 7369.510 - 7422.149: 0.9673% ( 70) 00:09:57.734 7422.149 - 7474.789: 1.5327% ( 76) 00:09:57.734 7474.789 - 7527.428: 2.2470% ( 96) 00:09:57.734 7527.428 - 7580.067: 3.0506% ( 108) 00:09:57.734 7580.067 - 7632.707: 4.0699% ( 137) 00:09:57.734 7632.707 - 7685.346: 5.2009% ( 152) 00:09:57.734 7685.346 - 7737.986: 6.3393% ( 153) 00:09:57.734 7737.986 - 7790.625: 8.1548% ( 244) 00:09:57.734 7790.625 - 7843.264: 10.0818% ( 259) 00:09:57.734 7843.264 - 7895.904: 11.7783% ( 228) 00:09:57.734 7895.904 - 7948.543: 14.2411% ( 331) 00:09:57.734 7948.543 - 8001.182: 17.2321% ( 402) 00:09:57.734 8001.182 - 8053.822: 19.8289% ( 349) 00:09:57.734 8053.822 - 8106.461: 22.6711% ( 382) 00:09:57.734 8106.461 - 8159.100: 25.2381% ( 345) 00:09:57.734 8159.100 - 8211.740: 28.3631% ( 420) 00:09:57.734 8211.740 - 8264.379: 30.9301% ( 345) 00:09:57.734 8264.379 - 8317.018: 33.3854% ( 330) 00:09:57.734 8317.018 - 8369.658: 35.6622% ( 306) 00:09:57.734 8369.658 - 8422.297: 37.4405% ( 239) 00:09:57.734 8422.297 - 8474.937: 39.2634% ( 245) 00:09:57.734 8474.937 - 8527.576: 40.9821% ( 231) 00:09:57.734 8527.576 - 8580.215: 42.3810% ( 188) 00:09:57.734 8580.215 - 8632.855: 44.0551% ( 225) 00:09:57.734 8632.855 - 8685.494: 45.9301% ( 252) 00:09:57.734 8685.494 - 8738.133: 47.7827% ( 249) 00:09:57.734 8738.133 - 8790.773: 
49.6057% ( 245) 00:09:57.734 8790.773 - 8843.412: 51.6146% ( 270) 00:09:57.734 8843.412 - 8896.051: 53.6384% ( 272) 00:09:57.734 8896.051 - 8948.691: 55.9152% ( 306) 00:09:57.734 8948.691 - 9001.330: 58.3780% ( 331) 00:09:57.734 9001.330 - 9053.969: 61.0045% ( 353) 00:09:57.734 9053.969 - 9106.609: 63.5565% ( 343) 00:09:57.734 9106.609 - 9159.248: 66.6220% ( 412) 00:09:57.734 9159.248 - 9211.888: 69.1964% ( 346) 00:09:57.734 9211.888 - 9264.527: 71.4062% ( 297) 00:09:57.734 9264.527 - 9317.166: 73.1176% ( 230) 00:09:57.734 9317.166 - 9369.806: 74.6503% ( 206) 00:09:57.734 9369.806 - 9422.445: 75.6473% ( 134) 00:09:57.734 9422.445 - 9475.084: 76.4286% ( 105) 00:09:57.734 9475.084 - 9527.724: 77.1205% ( 93) 00:09:57.734 9527.724 - 9580.363: 77.8199% ( 94) 00:09:57.734 9580.363 - 9633.002: 78.2812% ( 62) 00:09:57.734 9633.002 - 9685.642: 78.6830% ( 54) 00:09:57.734 9685.642 - 9738.281: 79.1518% ( 63) 00:09:57.734 9738.281 - 9790.920: 79.6429% ( 66) 00:09:57.734 9790.920 - 9843.560: 79.9777% ( 45) 00:09:57.734 9843.560 - 9896.199: 80.3125% ( 45) 00:09:57.734 9896.199 - 9948.839: 80.5580% ( 33) 00:09:57.734 9948.839 - 10001.478: 80.9301% ( 50) 00:09:57.734 10001.478 - 10054.117: 81.1235% ( 26) 00:09:57.734 10054.117 - 10106.757: 81.2649% ( 19) 00:09:57.734 10106.757 - 10159.396: 81.3988% ( 18) 00:09:57.734 10159.396 - 10212.035: 81.5253% ( 17) 00:09:57.734 10212.035 - 10264.675: 81.6890% ( 22) 00:09:57.734 10264.675 - 10317.314: 81.7708% ( 11) 00:09:57.734 10317.314 - 10369.953: 81.8304% ( 8) 00:09:57.734 10369.953 - 10422.593: 81.9048% ( 10) 00:09:57.734 10422.593 - 10475.232: 82.0461% ( 19) 00:09:57.734 10475.232 - 10527.871: 82.2098% ( 22) 00:09:57.734 10527.871 - 10580.511: 82.3363% ( 17) 00:09:57.734 10580.511 - 10633.150: 82.5074% ( 23) 00:09:57.734 10633.150 - 10685.790: 82.7083% ( 27) 00:09:57.734 10685.790 - 10738.429: 83.0208% ( 42) 00:09:57.734 10738.429 - 10791.068: 83.3333% ( 42) 00:09:57.734 10791.068 - 10843.708: 83.5342% ( 27) 00:09:57.734 10843.708 - 10896.347: 83.7054% ( 23) 00:09:57.734 10896.347 - 10948.986: 83.8914% ( 25) 00:09:57.734 10948.986 - 11001.626: 84.1518% ( 35) 00:09:57.734 11001.626 - 11054.265: 84.4345% ( 38) 00:09:57.734 11054.265 - 11106.904: 84.6726% ( 32) 00:09:57.734 11106.904 - 11159.544: 84.8363% ( 22) 00:09:57.734 11159.544 - 11212.183: 85.0074% ( 23) 00:09:57.734 11212.183 - 11264.822: 85.2083% ( 27) 00:09:57.735 11264.822 - 11317.462: 85.4018% ( 26) 00:09:57.735 11317.462 - 11370.101: 85.6473% ( 33) 00:09:57.735 11370.101 - 11422.741: 85.8185% ( 23) 00:09:57.735 11422.741 - 11475.380: 86.0045% ( 25) 00:09:57.735 11475.380 - 11528.019: 86.1682% ( 22) 00:09:57.735 11528.019 - 11580.659: 86.3616% ( 26) 00:09:57.735 11580.659 - 11633.298: 86.4955% ( 18) 00:09:57.735 11633.298 - 11685.937: 86.8527% ( 48) 00:09:57.735 11685.937 - 11738.577: 87.1801% ( 44) 00:09:57.735 11738.577 - 11791.216: 87.6190% ( 59) 00:09:57.735 11791.216 - 11843.855: 87.9688% ( 47) 00:09:57.735 11843.855 - 11896.495: 88.2887% ( 43) 00:09:57.735 11896.495 - 11949.134: 88.5789% ( 39) 00:09:57.735 11949.134 - 12001.773: 88.8393% ( 35) 00:09:57.735 12001.773 - 12054.413: 89.0923% ( 34) 00:09:57.735 12054.413 - 12107.052: 89.4940% ( 54) 00:09:57.735 12107.052 - 12159.692: 89.8363% ( 46) 00:09:57.735 12159.692 - 12212.331: 90.2232% ( 52) 00:09:57.735 12212.331 - 12264.970: 90.6473% ( 57) 00:09:57.735 12264.970 - 12317.610: 90.9673% ( 43) 00:09:57.735 12317.610 - 12370.249: 91.2202% ( 34) 00:09:57.735 12370.249 - 12422.888: 91.4435% ( 30) 00:09:57.735 12422.888 - 12475.528: 91.6667% ( 30) 
00:09:57.735 12475.528 - 12528.167: 91.8378% ( 23) 00:09:57.735 12528.167 - 12580.806: 92.0461% ( 28) 00:09:57.735 12580.806 - 12633.446: 92.1875% ( 19) 00:09:57.735 12633.446 - 12686.085: 92.3363% ( 20) 00:09:57.735 12686.085 - 12738.724: 92.4777% ( 19) 00:09:57.735 12738.724 - 12791.364: 92.6488% ( 23) 00:09:57.735 12791.364 - 12844.003: 92.7976% ( 20) 00:09:57.735 12844.003 - 12896.643: 92.9464% ( 20) 00:09:57.735 12896.643 - 12949.282: 93.1696% ( 30) 00:09:57.735 12949.282 - 13001.921: 93.3557% ( 25) 00:09:57.735 13001.921 - 13054.561: 93.5342% ( 24) 00:09:57.735 13054.561 - 13107.200: 93.7649% ( 31) 00:09:57.735 13107.200 - 13159.839: 93.9435% ( 24) 00:09:57.735 13159.839 - 13212.479: 94.1220% ( 24) 00:09:57.735 13212.479 - 13265.118: 94.2485% ( 17) 00:09:57.735 13265.118 - 13317.757: 94.4122% ( 22) 00:09:57.735 13317.757 - 13370.397: 94.4866% ( 10) 00:09:57.735 13370.397 - 13423.036: 94.5536% ( 9) 00:09:57.735 13423.036 - 13475.676: 94.6280% ( 10) 00:09:57.735 13475.676 - 13580.954: 94.7693% ( 19) 00:09:57.735 13580.954 - 13686.233: 94.8884% ( 16) 00:09:57.735 13686.233 - 13791.512: 94.9628% ( 10) 00:09:57.735 13791.512 - 13896.790: 95.0372% ( 10) 00:09:57.735 13896.790 - 14002.069: 95.1190% ( 11) 00:09:57.735 14002.069 - 14107.348: 95.1414% ( 3) 00:09:57.735 14107.348 - 14212.627: 95.1711% ( 4) 00:09:57.735 14212.627 - 14317.905: 95.2009% ( 4) 00:09:57.735 14317.905 - 14423.184: 95.2604% ( 8) 00:09:57.735 14423.184 - 14528.463: 95.3497% ( 12) 00:09:57.735 14528.463 - 14633.741: 95.4092% ( 8) 00:09:57.735 14633.741 - 14739.020: 95.5804% ( 23) 00:09:57.735 14739.020 - 14844.299: 95.8259% ( 33) 00:09:57.735 14844.299 - 14949.578: 96.1533% ( 44) 00:09:57.735 14949.578 - 15054.856: 96.3839% ( 31) 00:09:57.735 15054.856 - 15160.135: 96.5476% ( 22) 00:09:57.735 15160.135 - 15265.414: 96.6369% ( 12) 00:09:57.735 15265.414 - 15370.692: 96.7411% ( 14) 00:09:57.735 15370.692 - 15475.971: 96.8155% ( 10) 00:09:57.735 15475.971 - 15581.250: 96.8824% ( 9) 00:09:57.735 15581.250 - 15686.529: 96.9940% ( 15) 00:09:57.735 15686.529 - 15791.807: 97.1057% ( 15) 00:09:57.735 15791.807 - 15897.086: 97.1354% ( 4) 00:09:57.735 15897.086 - 16002.365: 97.1429% ( 1) 00:09:57.735 16107.643 - 16212.922: 97.1801% ( 5) 00:09:57.735 16212.922 - 16318.201: 97.2768% ( 13) 00:09:57.735 16318.201 - 16423.480: 97.4330% ( 21) 00:09:57.735 16423.480 - 16528.758: 97.5223% ( 12) 00:09:57.735 16528.758 - 16634.037: 97.5744% ( 7) 00:09:57.735 16634.037 - 16739.316: 97.6116% ( 5) 00:09:57.735 16739.316 - 16844.594: 97.6562% ( 6) 00:09:57.735 16844.594 - 16949.873: 97.7679% ( 15) 00:09:57.735 16949.873 - 17055.152: 97.9688% ( 27) 00:09:57.735 17055.152 - 17160.431: 98.0729% ( 14) 00:09:57.735 17160.431 - 17265.709: 98.1176% ( 6) 00:09:57.735 17265.709 - 17370.988: 98.1994% ( 11) 00:09:57.735 17370.988 - 17476.267: 98.2812% ( 11) 00:09:57.735 17476.267 - 17581.545: 98.3110% ( 4) 00:09:57.735 17581.545 - 17686.824: 98.3408% ( 4) 00:09:57.735 17686.824 - 17792.103: 98.3854% ( 6) 00:09:57.735 17792.103 - 17897.382: 98.4301% ( 6) 00:09:57.735 17897.382 - 18002.660: 98.5045% ( 10) 00:09:57.735 18002.660 - 18107.939: 98.6830% ( 24) 00:09:57.735 18107.939 - 18213.218: 98.7574% ( 10) 00:09:57.735 18213.218 - 18318.496: 98.7946% ( 5) 00:09:57.735 18318.496 - 18423.775: 98.8244% ( 4) 00:09:57.735 18423.775 - 18529.054: 98.8542% ( 4) 00:09:57.735 18529.054 - 18634.333: 98.8914% ( 5) 00:09:57.735 18634.333 - 18739.611: 98.9360% ( 6) 00:09:57.735 18739.611 - 18844.890: 98.9807% ( 6) 00:09:57.735 18844.890 - 18950.169: 99.0179% ( 5) 
00:09:57.735 18950.169 - 19055.447: 99.0476% ( 4) 00:09:57.735 30109.712 - 30320.270: 99.0625% ( 2) 00:09:57.735 30320.270 - 30530.827: 99.1146% ( 7) 00:09:57.735 30530.827 - 30741.385: 99.1741% ( 8) 00:09:57.735 30741.385 - 30951.942: 99.2336% ( 8) 00:09:57.735 30951.942 - 31162.500: 99.2932% ( 8) 00:09:57.735 31162.500 - 31373.057: 99.3527% ( 8) 00:09:57.735 31373.057 - 31583.614: 99.4122% ( 8) 00:09:57.735 31583.614 - 31794.172: 99.4717% ( 8) 00:09:57.735 31794.172 - 32004.729: 99.5238% ( 7) 00:09:57.735 37689.780 - 37900.337: 99.5536% ( 4) 00:09:57.735 37900.337 - 38110.895: 99.6131% ( 8) 00:09:57.735 38110.895 - 38321.452: 99.6652% ( 7) 00:09:57.735 38321.452 - 38532.010: 99.7173% ( 7) 00:09:57.735 38532.010 - 38742.567: 99.7768% ( 8) 00:09:57.735 38742.567 - 38953.124: 99.8363% ( 8) 00:09:57.735 38953.124 - 39163.682: 99.8958% ( 8) 00:09:57.735 39163.682 - 39374.239: 99.9554% ( 8) 00:09:57.735 39374.239 - 39584.797: 100.0000% ( 6) 00:09:57.735 00:09:57.735 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:57.735 ============================================================================== 00:09:57.735 Range in us Cumulative IO count 00:09:57.735 7001.035 - 7053.674: 0.0074% ( 1) 00:09:57.735 7211.592 - 7264.231: 0.0818% ( 10) 00:09:57.735 7264.231 - 7316.871: 0.1637% ( 11) 00:09:57.735 7316.871 - 7369.510: 0.3571% ( 26) 00:09:57.735 7369.510 - 7422.149: 0.7143% ( 48) 00:09:57.735 7422.149 - 7474.789: 1.1161% ( 54) 00:09:57.735 7474.789 - 7527.428: 1.7932% ( 91) 00:09:57.735 7527.428 - 7580.067: 2.7753% ( 132) 00:09:57.735 7580.067 - 7632.707: 3.9583% ( 159) 00:09:57.735 7632.707 - 7685.346: 5.3125% ( 182) 00:09:57.735 7685.346 - 7737.986: 7.5893% ( 306) 00:09:57.735 7737.986 - 7790.625: 9.8214% ( 300) 00:09:57.735 7790.625 - 7843.264: 12.0312% ( 297) 00:09:57.735 7843.264 - 7895.904: 14.5312% ( 336) 00:09:57.735 7895.904 - 7948.543: 16.7411% ( 297) 00:09:57.735 7948.543 - 8001.182: 18.7946% ( 276) 00:09:57.735 8001.182 - 8053.822: 20.8110% ( 271) 00:09:57.735 8053.822 - 8106.461: 22.7307% ( 258) 00:09:57.735 8106.461 - 8159.100: 25.1637% ( 327) 00:09:57.735 8159.100 - 8211.740: 27.7902% ( 353) 00:09:57.735 8211.740 - 8264.379: 30.2158% ( 326) 00:09:57.735 8264.379 - 8317.018: 33.2217% ( 404) 00:09:57.735 8317.018 - 8369.658: 35.7738% ( 343) 00:09:57.735 8369.658 - 8422.297: 37.9315% ( 290) 00:09:57.735 8422.297 - 8474.937: 39.7321% ( 242) 00:09:57.735 8474.937 - 8527.576: 41.2500% ( 204) 00:09:57.735 8527.576 - 8580.215: 42.7083% ( 196) 00:09:57.735 8580.215 - 8632.855: 44.0699% ( 183) 00:09:57.735 8632.855 - 8685.494: 45.3051% ( 166) 00:09:57.735 8685.494 - 8738.133: 46.7336% ( 192) 00:09:57.735 8738.133 - 8790.773: 48.2143% ( 199) 00:09:57.735 8790.773 - 8843.412: 50.0744% ( 250) 00:09:57.735 8843.412 - 8896.051: 52.0164% ( 261) 00:09:57.735 8896.051 - 8948.691: 54.4568% ( 328) 00:09:57.735 8948.691 - 9001.330: 57.3363% ( 387) 00:09:57.735 9001.330 - 9053.969: 60.2530% ( 392) 00:09:57.735 9053.969 - 9106.609: 63.6905% ( 462) 00:09:57.735 9106.609 - 9159.248: 66.6071% ( 392) 00:09:57.735 9159.248 - 9211.888: 69.2932% ( 361) 00:09:57.735 9211.888 - 9264.527: 71.5179% ( 299) 00:09:57.735 9264.527 - 9317.166: 73.1176% ( 215) 00:09:57.735 9317.166 - 9369.806: 74.2857% ( 157) 00:09:57.735 9369.806 - 9422.445: 75.3199% ( 139) 00:09:57.735 9422.445 - 9475.084: 76.0565% ( 99) 00:09:57.735 9475.084 - 9527.724: 76.8229% ( 103) 00:09:57.735 9527.724 - 9580.363: 77.4554% ( 85) 00:09:57.735 9580.363 - 9633.002: 77.9092% ( 61) 00:09:57.735 9633.002 - 9685.642: 78.3110% ( 54) 
00:09:57.735 9685.642 - 9738.281: 78.8542% ( 73) 00:09:57.735 9738.281 - 9790.920: 79.2336% ( 51) 00:09:57.735 9790.920 - 9843.560: 79.5610% ( 44) 00:09:57.735 9843.560 - 9896.199: 79.8512% ( 39) 00:09:57.735 9896.199 - 9948.839: 80.3497% ( 67) 00:09:57.735 9948.839 - 10001.478: 80.6250% ( 37) 00:09:57.735 10001.478 - 10054.117: 81.0193% ( 53) 00:09:57.735 10054.117 - 10106.757: 81.2723% ( 34) 00:09:57.735 10106.757 - 10159.396: 81.5699% ( 40) 00:09:57.735 10159.396 - 10212.035: 81.9420% ( 50) 00:09:57.735 10212.035 - 10264.675: 82.4628% ( 70) 00:09:57.735 10264.675 - 10317.314: 82.9167% ( 61) 00:09:57.735 10317.314 - 10369.953: 83.2738% ( 48) 00:09:57.735 10369.953 - 10422.593: 83.7426% ( 63) 00:09:57.736 10422.593 - 10475.232: 83.9137% ( 23) 00:09:57.736 10475.232 - 10527.871: 84.0179% ( 14) 00:09:57.736 10527.871 - 10580.511: 84.1071% ( 12) 00:09:57.736 10580.511 - 10633.150: 84.1815% ( 10) 00:09:57.736 10633.150 - 10685.790: 84.2336% ( 7) 00:09:57.736 10685.790 - 10738.429: 84.2932% ( 8) 00:09:57.736 10738.429 - 10791.068: 84.3229% ( 4) 00:09:57.736 10791.068 - 10843.708: 84.3452% ( 3) 00:09:57.736 10843.708 - 10896.347: 84.3676% ( 3) 00:09:57.736 10896.347 - 10948.986: 84.3899% ( 3) 00:09:57.736 10948.986 - 11001.626: 84.4643% ( 10) 00:09:57.736 11001.626 - 11054.265: 84.5312% ( 9) 00:09:57.736 11054.265 - 11106.904: 84.6131% ( 11) 00:09:57.736 11106.904 - 11159.544: 84.7396% ( 17) 00:09:57.736 11159.544 - 11212.183: 84.8512% ( 15) 00:09:57.736 11212.183 - 11264.822: 85.0298% ( 24) 00:09:57.736 11264.822 - 11317.462: 85.2455% ( 29) 00:09:57.736 11317.462 - 11370.101: 85.4762% ( 31) 00:09:57.736 11370.101 - 11422.741: 85.8780% ( 54) 00:09:57.736 11422.741 - 11475.380: 86.1161% ( 32) 00:09:57.736 11475.380 - 11528.019: 86.4658% ( 47) 00:09:57.736 11528.019 - 11580.659: 86.7262% ( 35) 00:09:57.736 11580.659 - 11633.298: 86.9122% ( 25) 00:09:57.736 11633.298 - 11685.937: 87.0982% ( 25) 00:09:57.736 11685.937 - 11738.577: 87.2470% ( 20) 00:09:57.736 11738.577 - 11791.216: 87.4330% ( 25) 00:09:57.736 11791.216 - 11843.855: 87.5670% ( 18) 00:09:57.736 11843.855 - 11896.495: 87.7679% ( 27) 00:09:57.736 11896.495 - 11949.134: 88.1622% ( 53) 00:09:57.736 11949.134 - 12001.773: 88.4970% ( 45) 00:09:57.736 12001.773 - 12054.413: 88.5863% ( 12) 00:09:57.736 12054.413 - 12107.052: 88.7128% ( 17) 00:09:57.736 12107.052 - 12159.692: 88.9732% ( 35) 00:09:57.736 12159.692 - 12212.331: 89.2411% ( 36) 00:09:57.736 12212.331 - 12264.970: 89.5461% ( 41) 00:09:57.736 12264.970 - 12317.610: 89.9256% ( 51) 00:09:57.736 12317.610 - 12370.249: 90.3869% ( 62) 00:09:57.736 12370.249 - 12422.888: 90.7738% ( 52) 00:09:57.736 12422.888 - 12475.528: 91.2500% ( 64) 00:09:57.736 12475.528 - 12528.167: 91.5327% ( 38) 00:09:57.736 12528.167 - 12580.806: 91.7485% ( 29) 00:09:57.736 12580.806 - 12633.446: 92.0312% ( 38) 00:09:57.736 12633.446 - 12686.085: 92.2768% ( 33) 00:09:57.736 12686.085 - 12738.724: 92.5000% ( 30) 00:09:57.736 12738.724 - 12791.364: 92.7604% ( 35) 00:09:57.736 12791.364 - 12844.003: 92.9985% ( 32) 00:09:57.736 12844.003 - 12896.643: 93.3780% ( 51) 00:09:57.736 12896.643 - 12949.282: 93.6161% ( 32) 00:09:57.736 12949.282 - 13001.921: 93.8021% ( 25) 00:09:57.736 13001.921 - 13054.561: 93.9881% ( 25) 00:09:57.736 13054.561 - 13107.200: 94.1146% ( 17) 00:09:57.736 13107.200 - 13159.839: 94.1890% ( 10) 00:09:57.736 13159.839 - 13212.479: 94.2708% ( 11) 00:09:57.736 13212.479 - 13265.118: 94.3750% ( 14) 00:09:57.736 13265.118 - 13317.757: 94.5238% ( 20) 00:09:57.736 13317.757 - 13370.397: 94.6577% ( 18) 
00:09:57.736 13370.397 - 13423.036: 94.7247% ( 9) 00:09:57.736 13423.036 - 13475.676: 94.7768% ( 7) 00:09:57.736 13475.676 - 13580.954: 94.9330% ( 21) 00:09:57.736 13580.954 - 13686.233: 95.1190% ( 25) 00:09:57.736 13686.233 - 13791.512: 95.2902% ( 23) 00:09:57.736 13791.512 - 13896.790: 95.4315% ( 19) 00:09:57.736 13896.790 - 14002.069: 95.4836% ( 7) 00:09:57.736 14002.069 - 14107.348: 95.5952% ( 15) 00:09:57.736 14107.348 - 14212.627: 95.6994% ( 14) 00:09:57.736 14212.627 - 14317.905: 95.7961% ( 13) 00:09:57.736 14317.905 - 14423.184: 95.8705% ( 10) 00:09:57.736 14423.184 - 14528.463: 96.0119% ( 19) 00:09:57.736 14528.463 - 14633.741: 96.0863% ( 10) 00:09:57.736 14633.741 - 14739.020: 96.1384% ( 7) 00:09:57.736 14739.020 - 14844.299: 96.1979% ( 8) 00:09:57.736 14844.299 - 14949.578: 96.2798% ( 11) 00:09:57.736 14949.578 - 15054.856: 96.3542% ( 10) 00:09:57.736 15054.856 - 15160.135: 96.5253% ( 23) 00:09:57.736 15160.135 - 15265.414: 96.8080% ( 38) 00:09:57.736 15265.414 - 15370.692: 96.9866% ( 24) 00:09:57.736 15370.692 - 15475.971: 97.1205% ( 18) 00:09:57.736 15475.971 - 15581.250: 97.1429% ( 3) 00:09:57.736 16318.201 - 16423.480: 97.2321% ( 12) 00:09:57.736 16423.480 - 16528.758: 97.2470% ( 2) 00:09:57.736 16528.758 - 16634.037: 97.2842% ( 5) 00:09:57.736 16634.037 - 16739.316: 97.3289% ( 6) 00:09:57.736 16739.316 - 16844.594: 97.3958% ( 9) 00:09:57.736 16844.594 - 16949.873: 97.5521% ( 21) 00:09:57.736 16949.873 - 17055.152: 97.7902% ( 32) 00:09:57.736 17055.152 - 17160.431: 97.8795% ( 12) 00:09:57.736 17160.431 - 17265.709: 97.9613% ( 11) 00:09:57.736 17265.709 - 17370.988: 98.0357% ( 10) 00:09:57.736 17370.988 - 17476.267: 98.0952% ( 8) 00:09:57.736 17686.824 - 17792.103: 98.1027% ( 1) 00:09:57.736 17897.382 - 18002.660: 98.1324% ( 4) 00:09:57.736 18002.660 - 18107.939: 98.2366% ( 14) 00:09:57.736 18107.939 - 18213.218: 98.4152% ( 24) 00:09:57.736 18213.218 - 18318.496: 98.6607% ( 33) 00:09:57.736 18318.496 - 18423.775: 98.7872% ( 17) 00:09:57.736 18423.775 - 18529.054: 98.8690% ( 11) 00:09:57.736 18529.054 - 18634.333: 98.9137% ( 6) 00:09:57.736 18634.333 - 18739.611: 98.9658% ( 7) 00:09:57.736 18739.611 - 18844.890: 99.0030% ( 5) 00:09:57.736 18844.890 - 18950.169: 99.0476% ( 6) 00:09:57.736 29478.040 - 29688.598: 99.0848% ( 5) 00:09:57.736 29688.598 - 29899.155: 99.1443% ( 8) 00:09:57.736 29899.155 - 30109.712: 99.2039% ( 8) 00:09:57.736 30109.712 - 30320.270: 99.2634% ( 8) 00:09:57.736 30320.270 - 30530.827: 99.3229% ( 8) 00:09:57.736 30530.827 - 30741.385: 99.3899% ( 9) 00:09:57.736 30741.385 - 30951.942: 99.4420% ( 7) 00:09:57.736 30951.942 - 31162.500: 99.4940% ( 7) 00:09:57.736 31162.500 - 31373.057: 99.5238% ( 4) 00:09:57.736 37058.108 - 37268.665: 99.5833% ( 8) 00:09:57.736 37268.665 - 37479.222: 99.6354% ( 7) 00:09:57.736 37479.222 - 37689.780: 99.6875% ( 7) 00:09:57.736 37689.780 - 37900.337: 99.7470% ( 8) 00:09:57.736 37900.337 - 38110.895: 99.8065% ( 8) 00:09:57.736 38110.895 - 38321.452: 99.8661% ( 8) 00:09:57.736 38321.452 - 38532.010: 99.9256% ( 8) 00:09:57.736 38532.010 - 38742.567: 99.9777% ( 7) 00:09:57.736 38742.567 - 38953.124: 100.0000% ( 3) 00:09:57.736 00:09:57.736 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:57.736 ============================================================================== 00:09:57.736 Range in us Cumulative IO count 00:09:57.736 7106.313 - 7158.953: 0.0074% ( 1) 00:09:57.736 7158.953 - 7211.592: 0.0149% ( 1) 00:09:57.736 7211.592 - 7264.231: 0.0298% ( 2) 00:09:57.736 7264.231 - 7316.871: 0.1116% ( 11) 00:09:57.736 
7316.871 - 7369.510: 0.2902% ( 24) 00:09:57.736 7369.510 - 7422.149: 0.5283% ( 32) 00:09:57.736 7422.149 - 7474.789: 1.1310% ( 81) 00:09:57.736 7474.789 - 7527.428: 1.8452% ( 96) 00:09:57.736 7527.428 - 7580.067: 2.9464% ( 148) 00:09:57.736 7580.067 - 7632.707: 4.6205% ( 225) 00:09:57.736 7632.707 - 7685.346: 6.1533% ( 206) 00:09:57.736 7685.346 - 7737.986: 8.0134% ( 250) 00:09:57.736 7737.986 - 7790.625: 9.9851% ( 265) 00:09:57.736 7790.625 - 7843.264: 12.1205% ( 287) 00:09:57.736 7843.264 - 7895.904: 14.2262% ( 283) 00:09:57.736 7895.904 - 7948.543: 16.3095% ( 280) 00:09:57.736 7948.543 - 8001.182: 18.5268% ( 298) 00:09:57.736 8001.182 - 8053.822: 20.3274% ( 242) 00:09:57.736 8053.822 - 8106.461: 22.7604% ( 327) 00:09:57.736 8106.461 - 8159.100: 24.6205% ( 250) 00:09:57.736 8159.100 - 8211.740: 26.8899% ( 305) 00:09:57.736 8211.740 - 8264.379: 29.3750% ( 334) 00:09:57.736 8264.379 - 8317.018: 32.0015% ( 353) 00:09:57.736 8317.018 - 8369.658: 34.6205% ( 352) 00:09:57.736 8369.658 - 8422.297: 36.6071% ( 267) 00:09:57.736 8422.297 - 8474.937: 38.7277% ( 285) 00:09:57.736 8474.937 - 8527.576: 40.6176% ( 254) 00:09:57.736 8527.576 - 8580.215: 42.0759% ( 196) 00:09:57.737 8580.215 - 8632.855: 43.4152% ( 180) 00:09:57.737 8632.855 - 8685.494: 44.8512% ( 193) 00:09:57.737 8685.494 - 8738.133: 46.6071% ( 236) 00:09:57.737 8738.133 - 8790.773: 48.6161% ( 270) 00:09:57.737 8790.773 - 8843.412: 50.6250% ( 270) 00:09:57.737 8843.412 - 8896.051: 52.9390% ( 311) 00:09:57.737 8896.051 - 8948.691: 55.6250% ( 361) 00:09:57.737 8948.691 - 9001.330: 58.2738% ( 356) 00:09:57.737 9001.330 - 9053.969: 61.1310% ( 384) 00:09:57.737 9053.969 - 9106.609: 64.1295% ( 403) 00:09:57.737 9106.609 - 9159.248: 66.8006% ( 359) 00:09:57.737 9159.248 - 9211.888: 69.3006% ( 336) 00:09:57.737 9211.888 - 9264.527: 71.2723% ( 265) 00:09:57.737 9264.527 - 9317.166: 72.8423% ( 211) 00:09:57.737 9317.166 - 9369.806: 74.0848% ( 167) 00:09:57.737 9369.806 - 9422.445: 75.0298% ( 127) 00:09:57.737 9422.445 - 9475.084: 76.0268% ( 134) 00:09:57.737 9475.084 - 9527.724: 76.8155% ( 106) 00:09:57.737 9527.724 - 9580.363: 77.4777% ( 89) 00:09:57.737 9580.363 - 9633.002: 78.1027% ( 84) 00:09:57.737 9633.002 - 9685.642: 78.6682% ( 76) 00:09:57.737 9685.642 - 9738.281: 79.0327% ( 49) 00:09:57.737 9738.281 - 9790.920: 79.3750% ( 46) 00:09:57.737 9790.920 - 9843.560: 79.7247% ( 47) 00:09:57.737 9843.560 - 9896.199: 80.0893% ( 49) 00:09:57.737 9896.199 - 9948.839: 80.4315% ( 46) 00:09:57.737 9948.839 - 10001.478: 80.8185% ( 52) 00:09:57.737 10001.478 - 10054.117: 81.0640% ( 33) 00:09:57.737 10054.117 - 10106.757: 81.3690% ( 41) 00:09:57.737 10106.757 - 10159.396: 81.8304% ( 62) 00:09:57.737 10159.396 - 10212.035: 82.1726% ( 46) 00:09:57.737 10212.035 - 10264.675: 82.5818% ( 55) 00:09:57.737 10264.675 - 10317.314: 83.0506% ( 63) 00:09:57.737 10317.314 - 10369.953: 83.4226% ( 50) 00:09:57.737 10369.953 - 10422.593: 83.6682% ( 33) 00:09:57.737 10422.593 - 10475.232: 83.9658% ( 40) 00:09:57.737 10475.232 - 10527.871: 84.1592% ( 26) 00:09:57.737 10527.871 - 10580.511: 84.2411% ( 11) 00:09:57.737 10580.511 - 10633.150: 84.2932% ( 7) 00:09:57.737 10633.150 - 10685.790: 84.3006% ( 1) 00:09:57.737 10738.429 - 10791.068: 84.3155% ( 2) 00:09:57.737 10791.068 - 10843.708: 84.3378% ( 3) 00:09:57.737 10843.708 - 10896.347: 84.3601% ( 3) 00:09:57.737 10896.347 - 10948.986: 84.3824% ( 3) 00:09:57.737 10948.986 - 11001.626: 84.4122% ( 4) 00:09:57.737 11001.626 - 11054.265: 84.4196% ( 1) 00:09:57.737 11054.265 - 11106.904: 84.4494% ( 4) 00:09:57.737 11106.904 - 
11159.544: 84.5387% ( 12) 00:09:57.737 11159.544 - 11212.183: 84.7098% ( 23) 00:09:57.737 11212.183 - 11264.822: 84.9554% ( 33) 00:09:57.737 11264.822 - 11317.462: 85.1786% ( 30) 00:09:57.737 11317.462 - 11370.101: 85.4167% ( 32) 00:09:57.737 11370.101 - 11422.741: 85.7217% ( 41) 00:09:57.737 11422.741 - 11475.380: 85.9524% ( 31) 00:09:57.737 11475.380 - 11528.019: 86.2277% ( 37) 00:09:57.737 11528.019 - 11580.659: 86.4509% ( 30) 00:09:57.737 11580.659 - 11633.298: 86.6815% ( 31) 00:09:57.737 11633.298 - 11685.937: 86.8973% ( 29) 00:09:57.737 11685.937 - 11738.577: 87.2173% ( 43) 00:09:57.737 11738.577 - 11791.216: 87.4330% ( 29) 00:09:57.737 11791.216 - 11843.855: 87.7083% ( 37) 00:09:57.737 11843.855 - 11896.495: 88.0506% ( 46) 00:09:57.737 11896.495 - 11949.134: 88.3705% ( 43) 00:09:57.737 11949.134 - 12001.773: 88.7202% ( 47) 00:09:57.737 12001.773 - 12054.413: 88.8914% ( 23) 00:09:57.737 12054.413 - 12107.052: 89.0923% ( 27) 00:09:57.737 12107.052 - 12159.692: 89.3601% ( 36) 00:09:57.737 12159.692 - 12212.331: 89.7396% ( 51) 00:09:57.737 12212.331 - 12264.970: 89.9777% ( 32) 00:09:57.737 12264.970 - 12317.610: 90.2307% ( 34) 00:09:57.737 12317.610 - 12370.249: 90.5357% ( 41) 00:09:57.737 12370.249 - 12422.888: 90.9598% ( 57) 00:09:57.737 12422.888 - 12475.528: 91.4732% ( 69) 00:09:57.737 12475.528 - 12528.167: 91.7560% ( 38) 00:09:57.737 12528.167 - 12580.806: 92.0312% ( 37) 00:09:57.737 12580.806 - 12633.446: 92.3884% ( 48) 00:09:57.737 12633.446 - 12686.085: 92.6488% ( 35) 00:09:57.737 12686.085 - 12738.724: 92.8051% ( 21) 00:09:57.737 12738.724 - 12791.364: 93.0580% ( 34) 00:09:57.737 12791.364 - 12844.003: 93.2440% ( 25) 00:09:57.737 12844.003 - 12896.643: 93.3780% ( 18) 00:09:57.737 12896.643 - 12949.282: 93.5119% ( 18) 00:09:57.737 12949.282 - 13001.921: 93.6235% ( 15) 00:09:57.737 13001.921 - 13054.561: 93.6830% ( 8) 00:09:57.737 13054.561 - 13107.200: 93.7277% ( 6) 00:09:57.737 13107.200 - 13159.839: 93.7649% ( 5) 00:09:57.737 13159.839 - 13212.479: 93.8170% ( 7) 00:09:57.737 13212.479 - 13265.118: 93.8244% ( 1) 00:09:57.737 13265.118 - 13317.757: 93.8318% ( 1) 00:09:57.737 13317.757 - 13370.397: 93.8690% ( 5) 00:09:57.737 13370.397 - 13423.036: 93.9286% ( 8) 00:09:57.737 13423.036 - 13475.676: 94.0253% ( 13) 00:09:57.737 13475.676 - 13580.954: 94.3601% ( 45) 00:09:57.737 13580.954 - 13686.233: 94.6429% ( 38) 00:09:57.737 13686.233 - 13791.512: 95.0372% ( 53) 00:09:57.737 13791.512 - 13896.790: 95.2827% ( 33) 00:09:57.737 13896.790 - 14002.069: 95.6920% ( 55) 00:09:57.737 14002.069 - 14107.348: 95.9375% ( 33) 00:09:57.737 14107.348 - 14212.627: 96.0491% ( 15) 00:09:57.737 14212.627 - 14317.905: 96.1161% ( 9) 00:09:57.737 14317.905 - 14423.184: 96.2202% ( 14) 00:09:57.737 14423.184 - 14528.463: 96.3244% ( 14) 00:09:57.737 14528.463 - 14633.741: 96.3914% ( 9) 00:09:57.737 14633.741 - 14739.020: 96.5179% ( 17) 00:09:57.737 14739.020 - 14844.299: 96.5997% ( 11) 00:09:57.737 14844.299 - 14949.578: 96.6295% ( 4) 00:09:57.737 14949.578 - 15054.856: 96.6592% ( 4) 00:09:57.737 15054.856 - 15160.135: 96.6667% ( 1) 00:09:57.737 15791.807 - 15897.086: 96.6815% ( 2) 00:09:57.737 15897.086 - 16002.365: 96.7932% ( 15) 00:09:57.737 16002.365 - 16107.643: 97.0982% ( 41) 00:09:57.737 16107.643 - 16212.922: 97.2470% ( 20) 00:09:57.737 16212.922 - 16318.201: 97.3140% ( 9) 00:09:57.737 16318.201 - 16423.480: 97.3512% ( 5) 00:09:57.737 16423.480 - 16528.758: 97.3958% ( 6) 00:09:57.737 16528.758 - 16634.037: 97.4330% ( 5) 00:09:57.737 16634.037 - 16739.316: 97.4777% ( 6) 00:09:57.737 16739.316 - 
16844.594: 97.5223% ( 6) 00:09:57.737 16844.594 - 16949.873: 97.5670% ( 6) 00:09:57.737 16949.873 - 17055.152: 97.6116% ( 6) 00:09:57.737 17055.152 - 17160.431: 97.6190% ( 1) 00:09:57.737 17160.431 - 17265.709: 97.6637% ( 6) 00:09:57.737 17265.709 - 17370.988: 97.7753% ( 15) 00:09:57.737 17370.988 - 17476.267: 97.9613% ( 25) 00:09:57.737 17476.267 - 17581.545: 98.1622% ( 27) 00:09:57.737 17581.545 - 17686.824: 98.2292% ( 9) 00:09:57.737 17686.824 - 17792.103: 98.3036% ( 10) 00:09:57.737 17792.103 - 17897.382: 98.3929% ( 12) 00:09:57.737 17897.382 - 18002.660: 98.4673% ( 10) 00:09:57.737 18002.660 - 18107.939: 98.5193% ( 7) 00:09:57.737 18107.939 - 18213.218: 98.5789% ( 8) 00:09:57.737 18213.218 - 18318.496: 98.6830% ( 14) 00:09:57.737 18318.496 - 18423.775: 98.7426% ( 8) 00:09:57.737 18423.775 - 18529.054: 98.7649% ( 3) 00:09:57.737 18529.054 - 18634.333: 98.7946% ( 4) 00:09:57.737 18634.333 - 18739.611: 98.8393% ( 6) 00:09:57.737 18739.611 - 18844.890: 98.8839% ( 6) 00:09:57.737 18844.890 - 18950.169: 98.9360% ( 7) 00:09:57.737 18950.169 - 19055.447: 98.9732% ( 5) 00:09:57.737 19055.447 - 19160.726: 99.0179% ( 6) 00:09:57.737 19160.726 - 19266.005: 99.0476% ( 4) 00:09:57.737 27583.023 - 27793.581: 99.0923% ( 6) 00:09:57.737 27793.581 - 28004.138: 99.1443% ( 7) 00:09:57.737 28004.138 - 28214.696: 99.2113% ( 9) 00:09:57.737 28214.696 - 28425.253: 99.2708% ( 8) 00:09:57.737 28425.253 - 28635.810: 99.3304% ( 8) 00:09:57.737 28635.810 - 28846.368: 99.3899% ( 8) 00:09:57.737 28846.368 - 29056.925: 99.4494% ( 8) 00:09:57.737 29056.925 - 29267.483: 99.5089% ( 8) 00:09:57.737 29267.483 - 29478.040: 99.5238% ( 2) 00:09:57.737 35163.091 - 35373.648: 99.5461% ( 3) 00:09:57.737 35373.648 - 35584.206: 99.6057% ( 8) 00:09:57.737 35584.206 - 35794.763: 99.6652% ( 8) 00:09:57.737 35794.763 - 36005.320: 99.7173% ( 7) 00:09:57.737 36005.320 - 36215.878: 99.7768% ( 8) 00:09:57.737 36215.878 - 36426.435: 99.8363% ( 8) 00:09:57.737 36426.435 - 36636.993: 99.8958% ( 8) 00:09:57.737 36636.993 - 36847.550: 99.9554% ( 8) 00:09:57.737 36847.550 - 37058.108: 100.0000% ( 6) 00:09:57.737 00:09:57.737 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:57.737 ============================================================================== 00:09:57.737 Range in us Cumulative IO count 00:09:57.737 7106.313 - 7158.953: 0.0074% ( 1) 00:09:57.737 7158.953 - 7211.592: 0.0149% ( 1) 00:09:57.737 7211.592 - 7264.231: 0.0744% ( 8) 00:09:57.737 7264.231 - 7316.871: 0.1637% ( 12) 00:09:57.737 7316.871 - 7369.510: 0.3423% ( 24) 00:09:57.737 7369.510 - 7422.149: 0.7143% ( 50) 00:09:57.737 7422.149 - 7474.789: 1.2500% ( 72) 00:09:57.737 7474.789 - 7527.428: 2.1057% ( 115) 00:09:57.738 7527.428 - 7580.067: 3.3333% ( 165) 00:09:57.738 7580.067 - 7632.707: 4.8065% ( 198) 00:09:57.738 7632.707 - 7685.346: 6.5104% ( 229) 00:09:57.738 7685.346 - 7737.986: 8.2589% ( 235) 00:09:57.738 7737.986 - 7790.625: 10.0074% ( 235) 00:09:57.738 7790.625 - 7843.264: 11.7634% ( 236) 00:09:57.738 7843.264 - 7895.904: 13.4896% ( 232) 00:09:57.738 7895.904 - 7948.543: 15.2158% ( 232) 00:09:57.738 7948.543 - 8001.182: 17.1429% ( 259) 00:09:57.738 8001.182 - 8053.822: 19.3229% ( 293) 00:09:57.738 8053.822 - 8106.461: 22.0610% ( 368) 00:09:57.738 8106.461 - 8159.100: 24.6801% ( 352) 00:09:57.738 8159.100 - 8211.740: 27.3661% ( 361) 00:09:57.738 8211.740 - 8264.379: 30.0595% ( 362) 00:09:57.738 8264.379 - 8317.018: 32.6414% ( 347) 00:09:57.738 8317.018 - 8369.658: 35.3051% ( 358) 00:09:57.738 8369.658 - 8422.297: 38.1845% ( 387) 00:09:57.738 
8422.297 - 8474.937: 40.2381% ( 276) 00:09:57.738 8474.937 - 8527.576: 41.9792% ( 234) 00:09:57.738 8527.576 - 8580.215: 43.5045% ( 205) 00:09:57.738 8580.215 - 8632.855: 45.1042% ( 215) 00:09:57.738 8632.855 - 8685.494: 46.3616% ( 169) 00:09:57.738 8685.494 - 8738.133: 47.7455% ( 186) 00:09:57.738 8738.133 - 8790.773: 49.6801% ( 260) 00:09:57.738 8790.773 - 8843.412: 51.6443% ( 264) 00:09:57.738 8843.412 - 8896.051: 53.8988% ( 303) 00:09:57.738 8896.051 - 8948.691: 56.2500% ( 316) 00:09:57.738 8948.691 - 9001.330: 58.2515% ( 269) 00:09:57.738 9001.330 - 9053.969: 60.5580% ( 310) 00:09:57.738 9053.969 - 9106.609: 62.7902% ( 300) 00:09:57.738 9106.609 - 9159.248: 65.1116% ( 312) 00:09:57.738 9159.248 - 9211.888: 67.4926% ( 320) 00:09:57.738 9211.888 - 9264.527: 69.5015% ( 270) 00:09:57.738 9264.527 - 9317.166: 71.3765% ( 252) 00:09:57.738 9317.166 - 9369.806: 73.0506% ( 225) 00:09:57.738 9369.806 - 9422.445: 74.2411% ( 160) 00:09:57.738 9422.445 - 9475.084: 75.4315% ( 160) 00:09:57.738 9475.084 - 9527.724: 76.2872% ( 115) 00:09:57.738 9527.724 - 9580.363: 77.0982% ( 109) 00:09:57.738 9580.363 - 9633.002: 77.8571% ( 102) 00:09:57.738 9633.002 - 9685.642: 78.4449% ( 79) 00:09:57.738 9685.642 - 9738.281: 79.0030% ( 75) 00:09:57.738 9738.281 - 9790.920: 79.6652% ( 89) 00:09:57.738 9790.920 - 9843.560: 80.2009% ( 72) 00:09:57.738 9843.560 - 9896.199: 80.7515% ( 74) 00:09:57.738 9896.199 - 9948.839: 81.3244% ( 77) 00:09:57.738 9948.839 - 10001.478: 81.7262% ( 54) 00:09:57.738 10001.478 - 10054.117: 82.0908% ( 49) 00:09:57.738 10054.117 - 10106.757: 82.3810% ( 39) 00:09:57.738 10106.757 - 10159.396: 82.5893% ( 28) 00:09:57.738 10159.396 - 10212.035: 82.8199% ( 31) 00:09:57.738 10212.035 - 10264.675: 83.0357% ( 29) 00:09:57.738 10264.675 - 10317.314: 83.2292% ( 26) 00:09:57.738 10317.314 - 10369.953: 83.4524% ( 30) 00:09:57.738 10369.953 - 10422.593: 83.5938% ( 19) 00:09:57.738 10422.593 - 10475.232: 83.6682% ( 10) 00:09:57.738 10475.232 - 10527.871: 83.7351% ( 9) 00:09:57.738 10527.871 - 10580.511: 83.7649% ( 4) 00:09:57.738 10580.511 - 10633.150: 83.8021% ( 5) 00:09:57.738 10633.150 - 10685.790: 83.8467% ( 6) 00:09:57.738 10685.790 - 10738.429: 83.9137% ( 9) 00:09:57.738 10738.429 - 10791.068: 83.9807% ( 9) 00:09:57.738 10791.068 - 10843.708: 84.1443% ( 22) 00:09:57.738 10843.708 - 10896.347: 84.2485% ( 14) 00:09:57.738 10896.347 - 10948.986: 84.3601% ( 15) 00:09:57.738 10948.986 - 11001.626: 84.4643% ( 14) 00:09:57.738 11001.626 - 11054.265: 84.6726% ( 28) 00:09:57.738 11054.265 - 11106.904: 84.8289% ( 21) 00:09:57.738 11106.904 - 11159.544: 84.9702% ( 19) 00:09:57.738 11159.544 - 11212.183: 85.2381% ( 36) 00:09:57.738 11212.183 - 11264.822: 85.4911% ( 34) 00:09:57.738 11264.822 - 11317.462: 85.7068% ( 29) 00:09:57.738 11317.462 - 11370.101: 85.8854% ( 24) 00:09:57.738 11370.101 - 11422.741: 86.0045% ( 16) 00:09:57.738 11422.741 - 11475.380: 86.1905% ( 25) 00:09:57.738 11475.380 - 11528.019: 86.4658% ( 37) 00:09:57.738 11528.019 - 11580.659: 86.6741% ( 28) 00:09:57.738 11580.659 - 11633.298: 87.0461% ( 50) 00:09:57.738 11633.298 - 11685.937: 87.4107% ( 49) 00:09:57.738 11685.937 - 11738.577: 87.8795% ( 63) 00:09:57.738 11738.577 - 11791.216: 88.3036% ( 57) 00:09:57.738 11791.216 - 11843.855: 88.5491% ( 33) 00:09:57.738 11843.855 - 11896.495: 88.8244% ( 37) 00:09:57.738 11896.495 - 11949.134: 89.0848% ( 35) 00:09:57.738 11949.134 - 12001.773: 89.2560% ( 23) 00:09:57.738 12001.773 - 12054.413: 89.4122% ( 21) 00:09:57.738 12054.413 - 12107.052: 89.5908% ( 24) 00:09:57.738 12107.052 - 12159.692: 
89.7321% ( 19) 00:09:57.738 12159.692 - 12212.331: 89.9628% ( 31) 00:09:57.738 12212.331 - 12264.970: 90.1711% ( 28) 00:09:57.738 12264.970 - 12317.610: 90.4688% ( 40) 00:09:57.738 12317.610 - 12370.249: 90.6622% ( 26) 00:09:57.738 12370.249 - 12422.888: 91.0119% ( 47) 00:09:57.738 12422.888 - 12475.528: 91.3988% ( 52) 00:09:57.738 12475.528 - 12528.167: 91.6890% ( 39) 00:09:57.738 12528.167 - 12580.806: 92.0908% ( 54) 00:09:57.738 12580.806 - 12633.446: 92.4926% ( 54) 00:09:57.738 12633.446 - 12686.085: 92.8423% ( 47) 00:09:57.738 12686.085 - 12738.724: 93.1027% ( 35) 00:09:57.738 12738.724 - 12791.364: 93.2961% ( 26) 00:09:57.738 12791.364 - 12844.003: 93.4375% ( 19) 00:09:57.738 12844.003 - 12896.643: 93.6086% ( 23) 00:09:57.738 12896.643 - 12949.282: 93.6607% ( 7) 00:09:57.738 12949.282 - 13001.921: 93.6979% ( 5) 00:09:57.738 13001.921 - 13054.561: 93.7426% ( 6) 00:09:57.738 13054.561 - 13107.200: 93.7723% ( 4) 00:09:57.738 13107.200 - 13159.839: 93.8170% ( 6) 00:09:57.738 13159.839 - 13212.479: 93.8542% ( 5) 00:09:57.738 13212.479 - 13265.118: 93.8765% ( 3) 00:09:57.738 13265.118 - 13317.757: 93.9062% ( 4) 00:09:57.738 13317.757 - 13370.397: 93.9509% ( 6) 00:09:57.738 13370.397 - 13423.036: 93.9881% ( 5) 00:09:57.738 13423.036 - 13475.676: 94.0253% ( 5) 00:09:57.738 13475.676 - 13580.954: 94.1146% ( 12) 00:09:57.738 13580.954 - 13686.233: 94.1964% ( 11) 00:09:57.738 13686.233 - 13791.512: 94.3452% ( 20) 00:09:57.738 13791.512 - 13896.790: 94.5387% ( 26) 00:09:57.738 13896.790 - 14002.069: 94.8512% ( 42) 00:09:57.738 14002.069 - 14107.348: 95.0372% ( 25) 00:09:57.738 14107.348 - 14212.627: 95.1488% ( 15) 00:09:57.738 14212.627 - 14317.905: 95.2679% ( 16) 00:09:57.738 14317.905 - 14423.184: 95.3869% ( 16) 00:09:57.738 14423.184 - 14528.463: 95.6250% ( 32) 00:09:57.738 14528.463 - 14633.741: 96.0863% ( 62) 00:09:57.738 14633.741 - 14739.020: 96.2128% ( 17) 00:09:57.738 14739.020 - 14844.299: 96.3690% ( 21) 00:09:57.738 14844.299 - 14949.578: 96.5551% ( 25) 00:09:57.738 14949.578 - 15054.856: 96.5923% ( 5) 00:09:57.738 15054.856 - 15160.135: 96.6295% ( 5) 00:09:57.738 15160.135 - 15265.414: 96.6592% ( 4) 00:09:57.738 15265.414 - 15370.692: 96.6667% ( 1) 00:09:57.738 15581.250 - 15686.529: 96.7411% ( 10) 00:09:57.738 15686.529 - 15791.807: 96.8155% ( 10) 00:09:57.738 15791.807 - 15897.086: 96.8750% ( 8) 00:09:57.738 15897.086 - 16002.365: 96.8899% ( 2) 00:09:57.738 16002.365 - 16107.643: 96.9048% ( 2) 00:09:57.738 16107.643 - 16212.922: 96.9345% ( 4) 00:09:57.738 16212.922 - 16318.201: 97.0536% ( 16) 00:09:57.738 16318.201 - 16423.480: 97.1726% ( 16) 00:09:57.738 16423.480 - 16528.758: 97.3289% ( 21) 00:09:57.738 16528.758 - 16634.037: 97.4777% ( 20) 00:09:57.738 16634.037 - 16739.316: 97.6042% ( 17) 00:09:57.738 16739.316 - 16844.594: 97.7083% ( 14) 00:09:57.738 16844.594 - 16949.873: 97.8125% ( 14) 00:09:57.738 16949.873 - 17055.152: 97.9092% ( 13) 00:09:57.738 17055.152 - 17160.431: 97.9688% ( 8) 00:09:57.738 17160.431 - 17265.709: 98.0208% ( 7) 00:09:57.738 17265.709 - 17370.988: 98.0580% ( 5) 00:09:57.738 17370.988 - 17476.267: 98.0878% ( 4) 00:09:57.738 17476.267 - 17581.545: 98.1696% ( 11) 00:09:57.738 17581.545 - 17686.824: 98.3185% ( 20) 00:09:57.738 17686.824 - 17792.103: 98.4747% ( 21) 00:09:57.738 17792.103 - 17897.382: 98.5045% ( 4) 00:09:57.738 17897.382 - 18002.660: 98.5417% ( 5) 00:09:57.738 18002.660 - 18107.939: 98.5714% ( 4) 00:09:57.738 18634.333 - 18739.611: 98.6086% ( 5) 00:09:57.738 18739.611 - 18844.890: 98.6905% ( 11) 00:09:57.738 18844.890 - 18950.169: 98.7649% ( 
10) 00:09:57.738 18950.169 - 19055.447: 98.8244% ( 8) 00:09:57.738 19055.447 - 19160.726: 98.8542% ( 4) 00:09:57.738 19160.726 - 19266.005: 98.8765% ( 3) 00:09:57.738 19266.005 - 19371.284: 98.9211% ( 6) 00:09:57.738 19371.284 - 19476.562: 98.9658% ( 6) 00:09:57.738 19476.562 - 19581.841: 99.0104% ( 6) 00:09:57.738 19581.841 - 19687.120: 99.0476% ( 5) 00:09:57.738 26109.121 - 26214.400: 99.0774% ( 4) 00:09:57.739 26214.400 - 26319.679: 99.1071% ( 4) 00:09:57.739 26319.679 - 26424.957: 99.1369% ( 4) 00:09:57.739 26424.957 - 26530.236: 99.1592% ( 3) 00:09:57.739 26530.236 - 26635.515: 99.1890% ( 4) 00:09:57.739 26635.515 - 26740.794: 99.2262% ( 5) 00:09:57.739 26740.794 - 26846.072: 99.2485% ( 3) 00:09:57.739 26846.072 - 26951.351: 99.2783% ( 4) 00:09:57.739 26951.351 - 27161.908: 99.3378% ( 8) 00:09:57.739 27161.908 - 27372.466: 99.4048% ( 9) 00:09:57.739 27372.466 - 27583.023: 99.4643% ( 8) 00:09:57.739 27583.023 - 27793.581: 99.5164% ( 7) 00:09:57.739 27793.581 - 28004.138: 99.5238% ( 1) 00:09:57.739 33689.189 - 33899.746: 99.5610% ( 5) 00:09:57.739 33899.746 - 34110.304: 99.6205% ( 8) 00:09:57.739 34110.304 - 34320.861: 99.6801% ( 8) 00:09:57.739 34320.861 - 34531.418: 99.7321% ( 7) 00:09:57.739 34531.418 - 34741.976: 99.7917% ( 8) 00:09:57.739 34741.976 - 34952.533: 99.8438% ( 7) 00:09:57.739 34952.533 - 35163.091: 99.8958% ( 7) 00:09:57.739 35163.091 - 35373.648: 99.9554% ( 8) 00:09:57.739 35373.648 - 35584.206: 100.0000% ( 6) 00:09:57.739 00:09:57.739 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:57.739 ============================================================================== 00:09:57.739 Range in us Cumulative IO count 00:09:57.739 7106.313 - 7158.953: 0.0074% ( 1) 00:09:57.739 7158.953 - 7211.592: 0.0444% ( 5) 00:09:57.739 7211.592 - 7264.231: 0.1185% ( 10) 00:09:57.739 7264.231 - 7316.871: 0.1999% ( 11) 00:09:57.739 7316.871 - 7369.510: 0.4813% ( 38) 00:09:57.739 7369.510 - 7422.149: 0.7775% ( 40) 00:09:57.739 7422.149 - 7474.789: 1.2367% ( 62) 00:09:57.739 7474.789 - 7527.428: 2.0957% ( 116) 00:09:57.739 7527.428 - 7580.067: 3.0806% ( 133) 00:09:57.739 7580.067 - 7632.707: 4.5690% ( 201) 00:09:57.739 7632.707 - 7685.346: 6.0204% ( 196) 00:09:57.739 7685.346 - 7737.986: 7.3682% ( 182) 00:09:57.739 7737.986 - 7790.625: 9.3158% ( 263) 00:09:57.739 7790.625 - 7843.264: 11.1523% ( 248) 00:09:57.739 7843.264 - 7895.904: 12.9221% ( 239) 00:09:57.739 7895.904 - 7948.543: 15.4028% ( 335) 00:09:57.739 7948.543 - 8001.182: 17.6318% ( 301) 00:09:57.739 8001.182 - 8053.822: 19.9941% ( 319) 00:09:57.739 8053.822 - 8106.461: 22.6007% ( 352) 00:09:57.739 8106.461 - 8159.100: 24.9556% ( 318) 00:09:57.739 8159.100 - 8211.740: 27.5178% ( 346) 00:09:57.739 8211.740 - 8264.379: 29.9319% ( 326) 00:09:57.739 8264.379 - 8317.018: 32.7088% ( 375) 00:09:57.739 8317.018 - 8369.658: 35.0267% ( 313) 00:09:57.739 8369.658 - 8422.297: 37.4185% ( 323) 00:09:57.739 8422.297 - 8474.937: 39.6993% ( 308) 00:09:57.739 8474.937 - 8527.576: 41.6543% ( 264) 00:09:57.739 8527.576 - 8580.215: 43.2464% ( 215) 00:09:57.739 8580.215 - 8632.855: 44.7793% ( 207) 00:09:57.739 8632.855 - 8685.494: 46.1937% ( 191) 00:09:57.739 8685.494 - 8738.133: 47.4452% ( 169) 00:09:57.739 8738.133 - 8790.773: 48.8818% ( 194) 00:09:57.739 8790.773 - 8843.412: 50.6665% ( 241) 00:09:57.739 8843.412 - 8896.051: 52.4882% ( 246) 00:09:57.739 8896.051 - 8948.691: 54.4209% ( 261) 00:09:57.739 8948.691 - 9001.330: 57.3534% ( 396) 00:09:57.739 9001.330 - 9053.969: 60.1303% ( 375) 00:09:57.739 9053.969 - 9106.609: 62.5889% ( 
332) 00:09:57.739 9106.609 - 9159.248: 65.2029% ( 353) 00:09:57.739 9159.248 - 9211.888: 67.8984% ( 364) 00:09:57.739 9211.888 - 9264.527: 69.8312% ( 261) 00:09:57.739 9264.527 - 9317.166: 71.4085% ( 213) 00:09:57.739 9317.166 - 9369.806: 72.9191% ( 204) 00:09:57.739 9369.806 - 9422.445: 74.3928% ( 199) 00:09:57.739 9422.445 - 9475.084: 75.5332% ( 154) 00:09:57.739 9475.084 - 9527.724: 76.3996% ( 117) 00:09:57.739 9527.724 - 9580.363: 77.1179% ( 97) 00:09:57.739 9580.363 - 9633.002: 77.6881% ( 77) 00:09:57.739 9633.002 - 9685.642: 78.1694% ( 65) 00:09:57.739 9685.642 - 9738.281: 78.6137% ( 60) 00:09:57.739 9738.281 - 9790.920: 79.0062% ( 53) 00:09:57.739 9790.920 - 9843.560: 79.5764% ( 77) 00:09:57.739 9843.560 - 9896.199: 80.0504% ( 64) 00:09:57.739 9896.199 - 9948.839: 80.4280% ( 51) 00:09:57.739 9948.839 - 10001.478: 80.8057% ( 51) 00:09:57.739 10001.478 - 10054.117: 81.1463% ( 46) 00:09:57.739 10054.117 - 10106.757: 81.6499% ( 68) 00:09:57.739 10106.757 - 10159.396: 82.0794% ( 58) 00:09:57.739 10159.396 - 10212.035: 82.4496% ( 50) 00:09:57.739 10212.035 - 10264.675: 82.7162% ( 36) 00:09:57.739 10264.675 - 10317.314: 82.8569% ( 19) 00:09:57.739 10317.314 - 10369.953: 82.9754% ( 16) 00:09:57.739 10369.953 - 10422.593: 83.0791% ( 14) 00:09:57.739 10422.593 - 10475.232: 83.1680% ( 12) 00:09:57.739 10475.232 - 10527.871: 83.2346% ( 9) 00:09:57.739 10527.871 - 10580.511: 83.2864% ( 7) 00:09:57.739 10580.511 - 10633.150: 83.3383% ( 7) 00:09:57.739 10633.150 - 10685.790: 83.3605% ( 3) 00:09:57.739 10685.790 - 10738.429: 83.3753% ( 2) 00:09:57.739 10738.429 - 10791.068: 83.4123% ( 5) 00:09:57.739 10791.068 - 10843.708: 83.4568% ( 6) 00:09:57.739 10843.708 - 10896.347: 83.5086% ( 7) 00:09:57.739 10896.347 - 10948.986: 83.5604% ( 7) 00:09:57.739 10948.986 - 11001.626: 83.7085% ( 20) 00:09:57.739 11001.626 - 11054.265: 83.9751% ( 36) 00:09:57.739 11054.265 - 11106.904: 84.1677% ( 26) 00:09:57.739 11106.904 - 11159.544: 84.4342% ( 36) 00:09:57.739 11159.544 - 11212.183: 84.6638% ( 31) 00:09:57.739 11212.183 - 11264.822: 84.9156% ( 34) 00:09:57.739 11264.822 - 11317.462: 85.2710% ( 48) 00:09:57.739 11317.462 - 11370.101: 85.6043% ( 45) 00:09:57.739 11370.101 - 11422.741: 85.9449% ( 46) 00:09:57.739 11422.741 - 11475.380: 86.2189% ( 37) 00:09:57.739 11475.380 - 11528.019: 86.4559% ( 32) 00:09:57.739 11528.019 - 11580.659: 86.6262% ( 23) 00:09:57.739 11580.659 - 11633.298: 86.7965% ( 23) 00:09:57.739 11633.298 - 11685.937: 87.0335% ( 32) 00:09:57.739 11685.937 - 11738.577: 87.3001% ( 36) 00:09:57.739 11738.577 - 11791.216: 87.6555% ( 48) 00:09:57.739 11791.216 - 11843.855: 87.9147% ( 35) 00:09:57.739 11843.855 - 11896.495: 88.1961% ( 38) 00:09:57.739 11896.495 - 11949.134: 88.4182% ( 30) 00:09:57.739 11949.134 - 12001.773: 88.6404% ( 30) 00:09:57.739 12001.773 - 12054.413: 88.9292% ( 39) 00:09:57.739 12054.413 - 12107.052: 89.1884% ( 35) 00:09:57.739 12107.052 - 12159.692: 89.4846% ( 40) 00:09:57.739 12159.692 - 12212.331: 89.6401% ( 21) 00:09:57.739 12212.331 - 12264.970: 89.8104% ( 23) 00:09:57.739 12264.970 - 12317.610: 90.0992% ( 39) 00:09:57.739 12317.610 - 12370.249: 90.3140% ( 29) 00:09:57.739 12370.249 - 12422.888: 90.6028% ( 39) 00:09:57.739 12422.888 - 12475.528: 90.8472% ( 33) 00:09:57.739 12475.528 - 12528.167: 91.2915% ( 60) 00:09:57.739 12528.167 - 12580.806: 91.8395% ( 74) 00:09:57.739 12580.806 - 12633.446: 92.1653% ( 44) 00:09:57.739 12633.446 - 12686.085: 92.5133% ( 47) 00:09:57.739 12686.085 - 12738.724: 92.8021% ( 39) 00:09:57.739 12738.724 - 12791.364: 93.1057% ( 41) 00:09:57.739 
12791.364 - 12844.003: 93.3057% ( 27) 00:09:57.739 12844.003 - 12896.643: 93.5427% ( 32) 00:09:57.739 12896.643 - 12949.282: 93.6611% ( 16) 00:09:57.739 12949.282 - 13001.921: 93.7574% ( 13) 00:09:57.739 13001.921 - 13054.561: 93.8315% ( 10) 00:09:57.739 13054.561 - 13107.200: 93.8907% ( 8) 00:09:57.739 13107.200 - 13159.839: 93.9277% ( 5) 00:09:57.739 13159.839 - 13212.479: 93.9648% ( 5) 00:09:57.739 13212.479 - 13265.118: 94.0018% ( 5) 00:09:57.739 13265.118 - 13317.757: 94.0240% ( 3) 00:09:57.739 13317.757 - 13370.397: 94.1055% ( 11) 00:09:57.739 13370.397 - 13423.036: 94.2536% ( 20) 00:09:57.739 13423.036 - 13475.676: 94.2906% ( 5) 00:09:57.739 13475.676 - 13580.954: 94.3794% ( 12) 00:09:57.739 13580.954 - 13686.233: 94.4683% ( 12) 00:09:57.739 13686.233 - 13791.512: 94.5646% ( 13) 00:09:57.739 13791.512 - 13896.790: 94.6238% ( 8) 00:09:57.739 13896.790 - 14002.069: 94.7571% ( 18) 00:09:57.739 14002.069 - 14107.348: 94.8460% ( 12) 00:09:57.739 14107.348 - 14212.627: 94.9941% ( 20) 00:09:57.739 14212.627 - 14317.905: 95.1866% ( 26) 00:09:57.739 14317.905 - 14423.184: 95.3051% ( 16) 00:09:57.739 14423.184 - 14528.463: 95.4754% ( 23) 00:09:57.739 14528.463 - 14633.741: 95.6161% ( 19) 00:09:57.739 14633.741 - 14739.020: 95.6828% ( 9) 00:09:57.739 14739.020 - 14844.299: 95.7346% ( 7) 00:09:57.739 14844.299 - 14949.578: 95.7420% ( 1) 00:09:57.739 15054.856 - 15160.135: 95.8901% ( 20) 00:09:57.739 15160.135 - 15265.414: 96.1641% ( 37) 00:09:57.739 15265.414 - 15370.692: 96.3863% ( 30) 00:09:57.739 15370.692 - 15475.971: 96.5640% ( 24) 00:09:57.739 15475.971 - 15581.250: 96.7565% ( 26) 00:09:57.739 15581.250 - 15686.529: 96.9342% ( 24) 00:09:57.739 15686.529 - 15791.807: 97.0083% ( 10) 00:09:57.739 15791.807 - 15897.086: 97.0453% ( 5) 00:09:57.739 15897.086 - 16002.365: 97.1564% ( 15) 00:09:57.739 16002.365 - 16107.643: 97.2675% ( 15) 00:09:57.740 16107.643 - 16212.922: 97.3711% ( 14) 00:09:57.740 16212.922 - 16318.201: 97.5044% ( 18) 00:09:57.740 16318.201 - 16423.480: 97.6007% ( 13) 00:09:57.740 16423.480 - 16528.758: 97.6896% ( 12) 00:09:57.740 16528.758 - 16634.037: 97.7710% ( 11) 00:09:57.740 16634.037 - 16739.316: 97.8821% ( 15) 00:09:57.740 16739.316 - 16844.594: 97.9562% ( 10) 00:09:57.740 16844.594 - 16949.873: 98.0154% ( 8) 00:09:57.740 16949.873 - 17055.152: 98.0524% ( 5) 00:09:57.740 17055.152 - 17160.431: 98.0820% ( 4) 00:09:57.740 17160.431 - 17265.709: 98.1043% ( 3) 00:09:57.740 17265.709 - 17370.988: 98.1117% ( 1) 00:09:57.740 17476.267 - 17581.545: 98.1339% ( 3) 00:09:57.740 17581.545 - 17686.824: 98.2302% ( 13) 00:09:57.740 17686.824 - 17792.103: 98.4597% ( 31) 00:09:57.740 17792.103 - 17897.382: 98.4893% ( 4) 00:09:57.740 17897.382 - 18002.660: 98.5560% ( 9) 00:09:57.740 18002.660 - 18107.939: 98.6597% ( 14) 00:09:57.740 18107.939 - 18213.218: 98.7707% ( 15) 00:09:57.740 18213.218 - 18318.496: 98.8448% ( 10) 00:09:57.740 18318.496 - 18423.775: 98.8966% ( 7) 00:09:57.740 18423.775 - 18529.054: 98.9559% ( 8) 00:09:57.740 18529.054 - 18634.333: 99.0299% ( 10) 00:09:57.740 18634.333 - 18739.611: 99.1040% ( 10) 00:09:57.740 18739.611 - 18844.890: 99.1780% ( 10) 00:09:57.740 18844.890 - 18950.169: 99.2595% ( 11) 00:09:57.740 18950.169 - 19055.447: 99.3187% ( 8) 00:09:57.740 19055.447 - 19160.726: 99.3483% ( 4) 00:09:57.740 19160.726 - 19266.005: 99.3706% ( 3) 00:09:57.740 19266.005 - 19371.284: 99.4076% ( 5) 00:09:57.740 19371.284 - 19476.562: 99.4372% ( 4) 00:09:57.740 19476.562 - 19581.841: 99.4668% ( 4) 00:09:57.740 19581.841 - 19687.120: 99.5039% ( 5) 00:09:57.740 19687.120 - 
19792.398: 99.5261% ( 3) 00:09:57.740 25582.728 - 25688.006: 99.5409% ( 2) 00:09:57.740 25688.006 - 25793.285: 99.5631% ( 3) 00:09:57.740 25793.285 - 25898.564: 99.5927% ( 4) 00:09:57.740 25898.564 - 26003.843: 99.6297% ( 5) 00:09:57.740 26003.843 - 26109.121: 99.6594% ( 4) 00:09:57.740 26109.121 - 26214.400: 99.6890% ( 4) 00:09:57.740 26214.400 - 26319.679: 99.7112% ( 3) 00:09:57.740 26319.679 - 26424.957: 99.7334% ( 3) 00:09:57.740 26424.957 - 26530.236: 99.7630% ( 4) 00:09:57.740 26530.236 - 26635.515: 99.7927% ( 4) 00:09:57.740 26635.515 - 26740.794: 99.8223% ( 4) 00:09:57.740 26740.794 - 26846.072: 99.8519% ( 4) 00:09:57.740 26846.072 - 26951.351: 99.8815% ( 4) 00:09:57.740 26951.351 - 27161.908: 99.9408% ( 8) 00:09:57.740 27161.908 - 27372.466: 100.0000% ( 8) 00:09:57.740 00:09:57.740 10:21:56 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:57.740 00:09:57.740 real 0m2.702s 00:09:57.740 user 0m2.264s 00:09:57.740 sys 0m0.330s 00:09:57.740 10:21:56 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.740 10:21:56 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:09:57.740 ************************************ 00:09:57.740 END TEST nvme_perf 00:09:57.740 ************************************ 00:09:57.740 10:21:56 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:57.740 10:21:56 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:57.740 10:21:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.740 10:21:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.740 ************************************ 00:09:57.740 START TEST nvme_hello_world 00:09:57.740 ************************************ 00:09:57.740 10:21:56 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:57.999 Initializing NVMe Controllers 00:09:57.999 Attached to 0000:00:10.0 00:09:57.999 Namespace ID: 1 size: 6GB 00:09:57.999 Attached to 0000:00:11.0 00:09:57.999 Namespace ID: 1 size: 5GB 00:09:57.999 Attached to 0000:00:13.0 00:09:57.999 Namespace ID: 1 size: 1GB 00:09:57.999 Attached to 0000:00:12.0 00:09:57.999 Namespace ID: 1 size: 4GB 00:09:57.999 Namespace ID: 2 size: 4GB 00:09:57.999 Namespace ID: 3 size: 4GB 00:09:57.999 Initialization complete. 00:09:57.999 INFO: using host memory buffer for IO 00:09:57.999 Hello world! 00:09:57.999 INFO: using host memory buffer for IO 00:09:57.999 Hello world! 00:09:57.999 INFO: using host memory buffer for IO 00:09:57.999 Hello world! 00:09:57.999 INFO: using host memory buffer for IO 00:09:57.999 Hello world! 00:09:57.999 INFO: using host memory buffer for IO 00:09:57.999 Hello world! 00:09:57.999 INFO: using host memory buffer for IO 00:09:57.999 Hello world! 
00:09:57.999 00:09:57.999 real 0m0.316s 00:09:57.999 user 0m0.119s 00:09:57.999 sys 0m0.146s 00:09:57.999 10:21:57 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.999 10:21:57 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:57.999 ************************************ 00:09:57.999 END TEST nvme_hello_world 00:09:57.999 ************************************ 00:09:58.000 10:21:57 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:58.000 10:21:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.000 10:21:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.000 10:21:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:58.000 ************************************ 00:09:58.000 START TEST nvme_sgl 00:09:58.000 ************************************ 00:09:58.000 10:21:57 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:58.261 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:09:58.261 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:09:58.261 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:09:58.261 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:09:58.261 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:09:58.261 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:09:58.261 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:09:58.261 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:09:58.261 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:09:58.261 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:09:58.261 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:09:58.261 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:09:58.261 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:09:58.261 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:09:58.261 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:09:58.261 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:09:58.261 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:09:58.261 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:09:58.261 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:09:58.261 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:09:58.261 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:09:58.261 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:09:58.261 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:09:58.261 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:09:58.261 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:09:58.261 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:09:58.261 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:09:58.261 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:09:58.261 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:09:58.261 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:09:58.261 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:09:58.261 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:09:58.261 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:09:58.261 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:09:58.261 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:58.261 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:58.261 NVMe Readv/Writev Request test 00:09:58.261 Attached to 0000:00:10.0 00:09:58.261 Attached to 0000:00:11.0 00:09:58.261 Attached to 0000:00:13.0 00:09:58.261 Attached to 0000:00:12.0 00:09:58.261 0000:00:10.0: build_io_request_2 test passed 00:09:58.261 0000:00:10.0: build_io_request_4 test passed 00:09:58.261 0000:00:10.0: build_io_request_5 test passed 00:09:58.261 0000:00:10.0: build_io_request_6 test passed 00:09:58.261 0000:00:10.0: build_io_request_7 test passed 00:09:58.261 0000:00:10.0: build_io_request_10 test passed 00:09:58.261 0000:00:11.0: build_io_request_2 test passed 00:09:58.261 0000:00:11.0: build_io_request_4 test passed 00:09:58.261 0000:00:11.0: build_io_request_5 test passed 00:09:58.261 0000:00:11.0: build_io_request_6 test passed 00:09:58.261 0000:00:11.0: build_io_request_7 test passed 00:09:58.261 0000:00:11.0: build_io_request_10 test passed 00:09:58.261 Cleaning up... 00:09:58.261 00:09:58.261 real 0m0.365s 00:09:58.261 user 0m0.169s 00:09:58.261 sys 0m0.153s 00:09:58.261 10:21:57 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.261 10:21:57 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:58.261 ************************************ 00:09:58.261 END TEST nvme_sgl 00:09:58.261 ************************************ 00:09:58.537 10:21:57 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:58.537 10:21:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.537 10:21:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.537 10:21:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:58.537 ************************************ 00:09:58.537 START TEST nvme_e2edp 00:09:58.537 ************************************ 00:09:58.537 10:21:57 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:58.810 NVMe Write/Read with End-to-End data protection test 00:09:58.810 Attached to 0000:00:10.0 00:09:58.810 Attached to 0000:00:11.0 00:09:58.810 Attached to 0000:00:13.0 00:09:58.810 Attached to 0000:00:12.0 00:09:58.810 Cleaning up... 
00:09:58.810 00:09:58.810 real 0m0.281s 00:09:58.810 user 0m0.090s 00:09:58.810 sys 0m0.143s 00:09:58.810 10:21:57 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.810 10:21:57 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:58.810 ************************************ 00:09:58.810 END TEST nvme_e2edp 00:09:58.810 ************************************ 00:09:58.810 10:21:58 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:58.810 10:21:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.810 10:21:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.810 10:21:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:58.810 ************************************ 00:09:58.810 START TEST nvme_reserve 00:09:58.810 ************************************ 00:09:58.810 10:21:58 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:59.068 ===================================================== 00:09:59.069 NVMe Controller at PCI bus 0, device 16, function 0 00:09:59.069 ===================================================== 00:09:59.069 Reservations: Not Supported 00:09:59.069 ===================================================== 00:09:59.069 NVMe Controller at PCI bus 0, device 17, function 0 00:09:59.069 ===================================================== 00:09:59.069 Reservations: Not Supported 00:09:59.069 ===================================================== 00:09:59.069 NVMe Controller at PCI bus 0, device 19, function 0 00:09:59.069 ===================================================== 00:09:59.069 Reservations: Not Supported 00:09:59.069 ===================================================== 00:09:59.069 NVMe Controller at PCI bus 0, device 18, function 0 00:09:59.069 ===================================================== 00:09:59.069 Reservations: Not Supported 00:09:59.069 Reservation test passed 00:09:59.069 ************************************ 00:09:59.069 END TEST nvme_reserve 00:09:59.069 ************************************ 00:09:59.069 00:09:59.069 real 0m0.297s 00:09:59.069 user 0m0.090s 00:09:59.069 sys 0m0.163s 00:09:59.069 10:21:58 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.069 10:21:58 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:59.069 10:21:58 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:59.069 10:21:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.069 10:21:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.069 10:21:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:59.069 ************************************ 00:09:59.069 START TEST nvme_err_injection 00:09:59.069 ************************************ 00:09:59.069 10:21:58 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:59.636 NVMe Error Injection test 00:09:59.636 Attached to 0000:00:10.0 00:09:59.636 Attached to 0000:00:11.0 00:09:59.636 Attached to 0000:00:13.0 00:09:59.636 Attached to 0000:00:12.0 00:09:59.636 0000:00:10.0: get features failed as expected 00:09:59.636 0000:00:11.0: get features failed as expected 00:09:59.636 0000:00:13.0: get features failed as expected 00:09:59.636 0000:00:12.0: get features failed as expected 00:09:59.636 
0000:00:11.0: get features successfully as expected 00:09:59.636 0000:00:13.0: get features successfully as expected 00:09:59.636 0000:00:12.0: get features successfully as expected 00:09:59.636 0000:00:10.0: get features successfully as expected 00:09:59.636 0000:00:10.0: read failed as expected 00:09:59.636 0000:00:11.0: read failed as expected 00:09:59.636 0000:00:13.0: read failed as expected 00:09:59.636 0000:00:12.0: read failed as expected 00:09:59.636 0000:00:11.0: read successfully as expected 00:09:59.636 0000:00:13.0: read successfully as expected 00:09:59.636 0000:00:12.0: read successfully as expected 00:09:59.636 0000:00:10.0: read successfully as expected 00:09:59.636 Cleaning up... 00:09:59.636 00:09:59.636 real 0m0.310s 00:09:59.636 user 0m0.116s 00:09:59.636 sys 0m0.148s 00:09:59.636 10:21:58 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.636 ************************************ 00:09:59.636 END TEST nvme_err_injection 00:09:59.636 ************************************ 00:09:59.636 10:21:58 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:59.636 10:21:58 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:59.636 10:21:58 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:09:59.636 10:21:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.636 10:21:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:59.636 ************************************ 00:09:59.636 START TEST nvme_overhead 00:09:59.636 ************************************ 00:09:59.636 10:21:58 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:01.013 Initializing NVMe Controllers 00:10:01.013 Attached to 0000:00:10.0 00:10:01.013 Attached to 0000:00:11.0 00:10:01.013 Attached to 0000:00:13.0 00:10:01.013 Attached to 0000:00:12.0 00:10:01.013 Initialization complete. Launching workers. 
00:10:01.013 submit (in ns) avg, min, max = 13620.6, 11094.8, 121412.0 00:10:01.013 complete (in ns) avg, min, max = 8187.2, 7776.7, 64440.2 00:10:01.013 00:10:01.013 Submit histogram 00:10:01.013 ================ 00:10:01.013 Range in us Cumulative Count 00:10:01.014 11.052 - 11.104: 0.0168% ( 1) 00:10:01.014 11.720 - 11.772: 0.0337% ( 1) 00:10:01.014 12.646 - 12.697: 0.0505% ( 1) 00:10:01.014 12.697 - 12.749: 0.1179% ( 4) 00:10:01.014 12.749 - 12.800: 0.6569% ( 32) 00:10:01.014 12.800 - 12.851: 1.7854% ( 67) 00:10:01.014 12.851 - 12.903: 5.4236% ( 216) 00:10:01.014 12.903 - 12.954: 11.1504% ( 340) 00:10:01.014 12.954 - 13.006: 19.9090% ( 520) 00:10:01.014 13.006 - 13.057: 29.0045% ( 540) 00:10:01.014 13.057 - 13.108: 37.4937% ( 504) 00:10:01.014 13.108 - 13.160: 44.3153% ( 405) 00:10:01.014 13.160 - 13.263: 56.7458% ( 738) 00:10:01.014 13.263 - 13.365: 67.3573% ( 630) 00:10:01.014 13.365 - 13.468: 76.4191% ( 538) 00:10:01.014 13.468 - 13.571: 82.3648% ( 353) 00:10:01.014 13.571 - 13.674: 86.4241% ( 241) 00:10:01.014 13.674 - 13.777: 89.4223% ( 178) 00:10:01.014 13.777 - 13.880: 91.3256% ( 113) 00:10:01.014 13.880 - 13.982: 92.8920% ( 93) 00:10:01.014 13.982 - 14.085: 93.6837% ( 47) 00:10:01.014 14.085 - 14.188: 93.8858% ( 12) 00:10:01.014 14.188 - 14.291: 94.1385% ( 15) 00:10:01.014 14.291 - 14.394: 94.3069% ( 10) 00:10:01.014 14.394 - 14.496: 94.3574% ( 3) 00:10:01.014 14.496 - 14.599: 94.4080% ( 3) 00:10:01.014 14.805 - 14.908: 94.4248% ( 1) 00:10:01.014 14.908 - 15.010: 94.4753% ( 3) 00:10:01.014 15.010 - 15.113: 94.5090% ( 2) 00:10:01.014 15.113 - 15.216: 94.5427% ( 2) 00:10:01.014 15.319 - 15.422: 94.5595% ( 1) 00:10:01.014 15.730 - 15.833: 94.5764% ( 1) 00:10:01.014 15.833 - 15.936: 94.5932% ( 1) 00:10:01.014 15.936 - 16.039: 94.6101% ( 1) 00:10:01.014 16.141 - 16.244: 94.6438% ( 2) 00:10:01.014 16.244 - 16.347: 94.6606% ( 1) 00:10:01.014 16.347 - 16.450: 94.6774% ( 1) 00:10:01.014 16.553 - 16.655: 94.7111% ( 2) 00:10:01.014 16.758 - 16.861: 94.7280% ( 1) 00:10:01.014 16.861 - 16.964: 94.7785% ( 3) 00:10:01.014 16.964 - 17.067: 94.8796% ( 6) 00:10:01.014 17.067 - 17.169: 95.0480% ( 10) 00:10:01.014 17.169 - 17.272: 95.2164% ( 10) 00:10:01.014 17.272 - 17.375: 95.5028% ( 17) 00:10:01.014 17.375 - 17.478: 95.8396% ( 20) 00:10:01.014 17.478 - 17.581: 96.1597% ( 19) 00:10:01.014 17.581 - 17.684: 96.4292% ( 16) 00:10:01.014 17.684 - 17.786: 96.6650% ( 14) 00:10:01.014 17.786 - 17.889: 96.7829% ( 7) 00:10:01.014 17.889 - 17.992: 96.9008% ( 7) 00:10:01.014 17.992 - 18.095: 97.0019% ( 6) 00:10:01.014 18.095 - 18.198: 97.0692% ( 4) 00:10:01.014 18.198 - 18.300: 97.1198% ( 3) 00:10:01.014 18.300 - 18.403: 97.3387% ( 13) 00:10:01.014 18.403 - 18.506: 97.4229% ( 5) 00:10:01.014 18.506 - 18.609: 97.5240% ( 6) 00:10:01.014 18.609 - 18.712: 97.7430% ( 13) 00:10:01.014 18.712 - 18.814: 97.8272% ( 5) 00:10:01.014 18.814 - 18.917: 97.9114% ( 5) 00:10:01.014 18.917 - 19.020: 98.0462% ( 8) 00:10:01.014 19.020 - 19.123: 98.1304% ( 5) 00:10:01.014 19.123 - 19.226: 98.1809% ( 3) 00:10:01.014 19.226 - 19.329: 98.2483% ( 4) 00:10:01.014 19.329 - 19.431: 98.3156% ( 4) 00:10:01.014 19.431 - 19.534: 98.3999% ( 5) 00:10:01.014 19.534 - 19.637: 98.5851% ( 11) 00:10:01.014 19.637 - 19.740: 98.6694% ( 5) 00:10:01.014 19.740 - 19.843: 98.7199% ( 3) 00:10:01.014 19.843 - 19.945: 98.8378% ( 7) 00:10:01.014 19.945 - 20.048: 98.9894% ( 9) 00:10:01.014 20.048 - 20.151: 99.0736% ( 5) 00:10:01.014 20.151 - 20.254: 99.1578% ( 5) 00:10:01.014 20.254 - 20.357: 99.2084% ( 3) 00:10:01.014 20.357 - 20.459: 99.2252% ( 1) 
00:10:01.014 20.459 - 20.562: 99.2420% ( 1) 00:10:01.014 20.562 - 20.665: 99.2589% ( 1) 00:10:01.014 20.665 - 20.768: 99.2757% ( 1) 00:10:01.014 20.973 - 21.076: 99.2926% ( 1) 00:10:01.014 21.282 - 21.385: 99.3094% ( 1) 00:10:01.014 21.488 - 21.590: 99.3599% ( 3) 00:10:01.014 21.590 - 21.693: 99.3768% ( 1) 00:10:01.014 21.693 - 21.796: 99.3936% ( 1) 00:10:01.014 21.796 - 21.899: 99.4105% ( 1) 00:10:01.014 22.104 - 22.207: 99.4442% ( 2) 00:10:01.014 22.310 - 22.413: 99.4779% ( 2) 00:10:01.014 22.516 - 22.618: 99.4947% ( 1) 00:10:01.014 22.721 - 22.824: 99.5115% ( 1) 00:10:01.014 22.824 - 22.927: 99.5284% ( 1) 00:10:01.014 23.030 - 23.133: 99.5452% ( 1) 00:10:01.014 23.133 - 23.235: 99.5958% ( 3) 00:10:01.014 23.235 - 23.338: 99.6126% ( 1) 00:10:01.014 23.441 - 23.544: 99.6294% ( 1) 00:10:01.014 23.647 - 23.749: 99.6463% ( 1) 00:10:01.014 23.852 - 23.955: 99.6631% ( 1) 00:10:01.014 25.806 - 25.908: 99.6800% ( 1) 00:10:01.014 26.011 - 26.114: 99.6968% ( 1) 00:10:01.014 26.937 - 27.142: 99.7137% ( 1) 00:10:01.014 28.582 - 28.787: 99.7305% ( 1) 00:10:01.014 30.638 - 30.843: 99.7473% ( 1) 00:10:01.014 30.843 - 31.049: 99.7642% ( 1) 00:10:01.014 31.460 - 31.666: 99.7810% ( 1) 00:10:01.014 32.283 - 32.488: 99.8147% ( 2) 00:10:01.014 33.311 - 33.516: 99.8316% ( 1) 00:10:01.014 37.012 - 37.218: 99.8484% ( 1) 00:10:01.014 39.480 - 39.685: 99.8653% ( 1) 00:10:01.014 39.685 - 39.891: 99.8821% ( 1) 00:10:01.014 39.891 - 40.096: 99.8989% ( 1) 00:10:01.014 46.059 - 46.265: 99.9158% ( 1) 00:10:01.014 52.639 - 53.051: 99.9326% ( 1) 00:10:01.014 53.051 - 53.462: 99.9495% ( 1) 00:10:01.014 65.799 - 66.210: 99.9663% ( 1) 00:10:01.014 90.885 - 91.296: 99.9832% ( 1) 00:10:01.014 120.906 - 121.729: 100.0000% ( 1) 00:10:01.014 00:10:01.014 Complete histogram 00:10:01.014 ================== 00:10:01.014 Range in us Cumulative Count 00:10:01.014 7.762 - 7.814: 0.5558% ( 33) 00:10:01.014 7.814 - 7.865: 8.3712% ( 464) 00:10:01.014 7.865 - 7.916: 26.6296% ( 1084) 00:10:01.014 7.916 - 7.968: 50.4127% ( 1412) 00:10:01.014 7.968 - 8.019: 66.8688% ( 977) 00:10:01.014 8.019 - 8.071: 76.6380% ( 580) 00:10:01.014 8.071 - 8.122: 82.7185% ( 361) 00:10:01.014 8.122 - 8.173: 87.7379% ( 298) 00:10:01.014 8.173 - 8.225: 91.5446% ( 226) 00:10:01.014 8.225 - 8.276: 93.2963% ( 104) 00:10:01.014 8.276 - 8.328: 94.4585% ( 69) 00:10:01.014 8.328 - 8.379: 94.9806% ( 31) 00:10:01.014 8.379 - 8.431: 95.3680% ( 23) 00:10:01.014 8.431 - 8.482: 95.6038% ( 14) 00:10:01.014 8.482 - 8.533: 95.8060% ( 12) 00:10:01.014 8.533 - 8.585: 96.0755% ( 16) 00:10:01.014 8.585 - 8.636: 96.4292% ( 21) 00:10:01.014 8.636 - 8.688: 96.8334% ( 24) 00:10:01.014 8.688 - 8.739: 97.2545% ( 25) 00:10:01.014 8.739 - 8.790: 97.5408% ( 17) 00:10:01.014 8.790 - 8.842: 97.6419% ( 6) 00:10:01.014 8.842 - 8.893: 97.7093% ( 4) 00:10:01.014 8.893 - 8.945: 97.7430% ( 2) 00:10:01.014 8.945 - 8.996: 97.8272% ( 5) 00:10:01.014 8.996 - 9.047: 97.8440% ( 1) 00:10:01.014 9.099 - 9.150: 97.8777% ( 2) 00:10:01.014 9.356 - 9.407: 97.8946% ( 1) 00:10:01.014 9.407 - 9.459: 97.9114% ( 1) 00:10:01.014 9.613 - 9.664: 97.9451% ( 2) 00:10:01.014 9.716 - 9.767: 97.9619% ( 1) 00:10:01.014 10.024 - 10.076: 97.9788% ( 1) 00:10:01.014 10.127 - 10.178: 97.9956% ( 1) 00:10:01.014 10.384 - 10.435: 98.0125% ( 1) 00:10:01.014 10.435 - 10.487: 98.0293% ( 1) 00:10:01.014 10.641 - 10.692: 98.0462% ( 1) 00:10:01.014 10.949 - 11.001: 98.0630% ( 1) 00:10:01.014 11.155 - 11.206: 98.0798% ( 1) 00:10:01.014 11.309 - 11.361: 98.0967% ( 1) 00:10:01.014 11.515 - 11.566: 98.1135% ( 1) 00:10:01.014 11.618 - 11.669: 
98.1472% ( 2) 00:10:01.014 11.875 - 11.926: 98.1641% ( 1) 00:10:01.014 12.029 - 12.080: 98.1809% ( 1) 00:10:01.014 12.132 - 12.183: 98.1977% ( 1) 00:10:01.014 12.286 - 12.337: 98.2146% ( 1) 00:10:01.014 13.160 - 13.263: 98.3156% ( 6) 00:10:01.014 13.263 - 13.365: 98.4504% ( 8) 00:10:01.014 13.365 - 13.468: 98.5683% ( 7) 00:10:01.014 13.468 - 13.571: 98.6525% ( 5) 00:10:01.014 13.571 - 13.674: 98.7199% ( 4) 00:10:01.014 13.674 - 13.777: 98.7873% ( 4) 00:10:01.014 13.777 - 13.880: 98.8210% ( 2) 00:10:01.014 13.880 - 13.982: 98.8715% ( 3) 00:10:01.014 13.982 - 14.085: 98.9894% ( 7) 00:10:01.014 14.085 - 14.188: 99.0568% ( 4) 00:10:01.014 14.188 - 14.291: 99.1241% ( 4) 00:10:01.014 14.291 - 14.394: 99.2589% ( 8) 00:10:01.014 14.394 - 14.496: 99.2757% ( 1) 00:10:01.014 14.496 - 14.599: 99.3263% ( 3) 00:10:01.014 14.599 - 14.702: 99.3599% ( 2) 00:10:01.014 14.805 - 14.908: 99.3936% ( 2) 00:10:01.014 15.010 - 15.113: 99.4273% ( 2) 00:10:01.014 15.113 - 15.216: 99.4779% ( 3) 00:10:01.014 15.216 - 15.319: 99.4947% ( 1) 00:10:01.014 15.319 - 15.422: 99.5115% ( 1) 00:10:01.015 15.627 - 15.730: 99.5621% ( 3) 00:10:01.015 15.833 - 15.936: 99.5789% ( 1) 00:10:01.015 16.039 - 16.141: 99.5958% ( 1) 00:10:01.015 16.141 - 16.244: 99.6126% ( 1) 00:10:01.015 16.655 - 16.758: 99.6294% ( 1) 00:10:01.015 18.300 - 18.403: 99.6463% ( 1) 00:10:01.015 19.431 - 19.534: 99.6631% ( 1) 00:10:01.015 20.562 - 20.665: 99.6800% ( 1) 00:10:01.015 21.282 - 21.385: 99.6968% ( 1) 00:10:01.015 21.385 - 21.488: 99.7137% ( 1) 00:10:01.015 22.927 - 23.030: 99.7305% ( 1) 00:10:01.015 23.441 - 23.544: 99.7642% ( 2) 00:10:01.015 24.161 - 24.263: 99.7810% ( 1) 00:10:01.015 24.572 - 24.675: 99.7979% ( 1) 00:10:01.015 26.731 - 26.937: 99.8147% ( 1) 00:10:01.015 27.348 - 27.553: 99.8316% ( 1) 00:10:01.015 27.759 - 27.965: 99.8484% ( 1) 00:10:01.015 29.815 - 30.021: 99.8653% ( 1) 00:10:01.015 31.049 - 31.255: 99.8821% ( 1) 00:10:01.015 33.311 - 33.516: 99.8989% ( 1) 00:10:01.015 34.750 - 34.956: 99.9158% ( 1) 00:10:01.015 45.443 - 45.648: 99.9326% ( 1) 00:10:01.015 45.648 - 45.854: 99.9495% ( 1) 00:10:01.015 50.994 - 51.200: 99.9663% ( 1) 00:10:01.015 51.406 - 51.611: 99.9832% ( 1) 00:10:01.015 64.154 - 64.565: 100.0000% ( 1) 00:10:01.015 00:10:01.015 ************************************ 00:10:01.015 END TEST nvme_overhead 00:10:01.015 ************************************ 00:10:01.015 00:10:01.015 real 0m1.305s 00:10:01.015 user 0m1.112s 00:10:01.015 sys 0m0.146s 00:10:01.015 10:22:00 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.015 10:22:00 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:01.015 10:22:00 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:01.015 10:22:00 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:01.015 10:22:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.015 10:22:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:01.015 ************************************ 00:10:01.015 START TEST nvme_arbitration 00:10:01.015 ************************************ 00:10:01.015 10:22:00 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:04.295 Initializing NVMe Controllers 00:10:04.295 Attached to 0000:00:10.0 00:10:04.295 Attached to 0000:00:11.0 00:10:04.295 Attached to 0000:00:13.0 00:10:04.295 Attached to 0000:00:12.0 00:10:04.295 Associating QEMU NVMe Ctrl (12340 ) with 
lcore 0 00:10:04.295 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:04.295 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:04.295 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:04.295 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:04.295 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:04.295 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:04.295 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:04.295 Initialization complete. Launching workers. 00:10:04.295 Starting thread on core 1 with urgent priority queue 00:10:04.295 Starting thread on core 2 with urgent priority queue 00:10:04.295 Starting thread on core 3 with urgent priority queue 00:10:04.295 Starting thread on core 0 with urgent priority queue 00:10:04.295 QEMU NVMe Ctrl (12340 ) core 0: 618.67 IO/s 161.64 secs/100000 ios 00:10:04.295 QEMU NVMe Ctrl (12342 ) core 0: 618.67 IO/s 161.64 secs/100000 ios 00:10:04.295 QEMU NVMe Ctrl (12341 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:10:04.295 QEMU NVMe Ctrl (12342 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:10:04.295 QEMU NVMe Ctrl (12343 ) core 2: 533.33 IO/s 187.50 secs/100000 ios 00:10:04.295 QEMU NVMe Ctrl (12342 ) core 3: 490.67 IO/s 203.80 secs/100000 ios 00:10:04.295 ======================================================== 00:10:04.295 00:10:04.295 00:10:04.295 real 0m3.435s 00:10:04.295 user 0m9.409s 00:10:04.295 sys 0m0.162s 00:10:04.295 ************************************ 00:10:04.295 END TEST nvme_arbitration 00:10:04.295 ************************************ 00:10:04.295 10:22:03 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.295 10:22:03 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:04.554 10:22:03 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:04.554 10:22:03 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:04.554 10:22:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.554 10:22:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.554 ************************************ 00:10:04.554 START TEST nvme_single_aen 00:10:04.554 ************************************ 00:10:04.554 10:22:03 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:04.812 Asynchronous Event Request test 00:10:04.812 Attached to 0000:00:10.0 00:10:04.812 Attached to 0000:00:11.0 00:10:04.812 Attached to 0000:00:13.0 00:10:04.812 Attached to 0000:00:12.0 00:10:04.812 Reset controller to setup AER completions for this process 00:10:04.812 Registering asynchronous event callbacks... 
00:10:04.812 Getting orig temperature thresholds of all controllers 00:10:04.812 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:04.812 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:04.812 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:04.812 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:04.812 Setting all controllers temperature threshold low to trigger AER 00:10:04.812 Waiting for all controllers temperature threshold to be set lower 00:10:04.812 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:04.812 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:04.812 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:04.812 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:04.812 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:04.812 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:04.812 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:04.812 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:04.812 Waiting for all controllers to trigger AER and reset threshold 00:10:04.812 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.812 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.812 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.813 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.813 Cleaning up... 00:10:04.813 00:10:04.813 real 0m0.330s 00:10:04.813 user 0m0.115s 00:10:04.813 sys 0m0.171s 00:10:04.813 10:22:03 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.813 10:22:03 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:04.813 ************************************ 00:10:04.813 END TEST nvme_single_aen 00:10:04.813 ************************************ 00:10:04.813 10:22:04 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:04.813 10:22:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.813 10:22:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.813 10:22:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.813 ************************************ 00:10:04.813 START TEST nvme_doorbell_aers 00:10:04.813 ************************************ 00:10:04.813 10:22:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:10:04.813 10:22:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:04.813 10:22:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:04.813 10:22:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:04.813 10:22:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:04.813 10:22:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:04.813 10:22:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:10:04.813 10:22:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:04.813 10:22:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:04.813 10:22:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:10:05.071 10:22:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:05.071 10:22:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:05.071 10:22:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:05.071 10:22:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:05.330 [2024-12-07 10:22:04.494131] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:15.294 Executing: test_write_invalid_db 00:10:15.294 Waiting for AER completion... 00:10:15.294 Failure: test_write_invalid_db 00:10:15.294 00:10:15.294 Executing: test_invalid_db_write_overflow_sq 00:10:15.294 Waiting for AER completion... 00:10:15.294 Failure: test_invalid_db_write_overflow_sq 00:10:15.294 00:10:15.294 Executing: test_invalid_db_write_overflow_cq 00:10:15.294 Waiting for AER completion... 00:10:15.294 Failure: test_invalid_db_write_overflow_cq 00:10:15.294 00:10:15.294 10:22:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:15.294 10:22:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:15.294 [2024-12-07 10:22:14.534449] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:25.260 Executing: test_write_invalid_db 00:10:25.260 Waiting for AER completion... 00:10:25.260 Failure: test_write_invalid_db 00:10:25.260 00:10:25.260 Executing: test_invalid_db_write_overflow_sq 00:10:25.260 Waiting for AER completion... 00:10:25.260 Failure: test_invalid_db_write_overflow_sq 00:10:25.260 00:10:25.260 Executing: test_invalid_db_write_overflow_cq 00:10:25.260 Waiting for AER completion... 00:10:25.260 Failure: test_invalid_db_write_overflow_cq 00:10:25.260 00:10:25.260 10:22:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:25.260 10:22:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:25.260 [2024-12-07 10:22:24.599541] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:35.224 Executing: test_write_invalid_db 00:10:35.224 Waiting for AER completion... 00:10:35.224 Failure: test_write_invalid_db 00:10:35.224 00:10:35.224 Executing: test_invalid_db_write_overflow_sq 00:10:35.224 Waiting for AER completion... 00:10:35.224 Failure: test_invalid_db_write_overflow_sq 00:10:35.224 00:10:35.224 Executing: test_invalid_db_write_overflow_cq 00:10:35.224 Waiting for AER completion... 
00:10:35.224 Failure: test_invalid_db_write_overflow_cq 00:10:35.224 00:10:35.224 10:22:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:35.224 10:22:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:35.483 [2024-12-07 10:22:34.644672] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:45.444 Executing: test_write_invalid_db 00:10:45.444 Waiting for AER completion... 00:10:45.444 Failure: test_write_invalid_db 00:10:45.444 00:10:45.444 Executing: test_invalid_db_write_overflow_sq 00:10:45.444 Waiting for AER completion... 00:10:45.444 Failure: test_invalid_db_write_overflow_sq 00:10:45.444 00:10:45.444 Executing: test_invalid_db_write_overflow_cq 00:10:45.444 Waiting for AER completion... 00:10:45.444 Failure: test_invalid_db_write_overflow_cq 00:10:45.444 00:10:45.444 ************************************ 00:10:45.444 END TEST nvme_doorbell_aers 00:10:45.444 ************************************ 00:10:45.444 00:10:45.444 real 0m40.330s 00:10:45.444 user 0m28.431s 00:10:45.444 sys 0m11.487s 00:10:45.444 10:22:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.444 10:22:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:45.444 10:22:44 nvme -- nvme/nvme.sh@97 -- # uname 00:10:45.444 10:22:44 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:45.444 10:22:44 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:45.444 10:22:44 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:45.444 10:22:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.444 10:22:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:45.444 ************************************ 00:10:45.444 START TEST nvme_multi_aen 00:10:45.444 ************************************ 00:10:45.444 10:22:44 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:45.444 [2024-12-07 10:22:44.725161] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:45.444 [2024-12-07 10:22:44.725445] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:45.444 [2024-12-07 10:22:44.725468] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:45.444 [2024-12-07 10:22:44.727305] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:45.444 [2024-12-07 10:22:44.727353] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:45.444 [2024-12-07 10:22:44.727369] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:45.444 [2024-12-07 10:22:44.728809] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. 
Dropping the request. 00:10:45.444 [2024-12-07 10:22:44.728984] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:45.444 [2024-12-07 10:22:44.729008] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:45.444 [2024-12-07 10:22:44.730423] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:45.444 [2024-12-07 10:22:44.730466] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:45.444 [2024-12-07 10:22:44.730481] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request. 00:10:45.444 Child process pid: 64996 00:10:45.701 [Child] Asynchronous Event Request test 00:10:45.701 [Child] Attached to 0000:00:10.0 00:10:45.701 [Child] Attached to 0000:00:11.0 00:10:45.701 [Child] Attached to 0000:00:13.0 00:10:45.701 [Child] Attached to 0000:00:12.0 00:10:45.701 [Child] Registering asynchronous event callbacks... 00:10:45.701 [Child] Getting orig temperature thresholds of all controllers 00:10:45.701 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:45.701 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:45.701 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:45.701 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:45.701 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:45.701 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:45.701 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:45.701 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:45.701 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:45.701 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:45.701 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:45.701 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:45.701 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:45.701 [Child] Cleaning up... 00:10:45.958 Asynchronous Event Request test 00:10:45.958 Attached to 0000:00:10.0 00:10:45.958 Attached to 0000:00:11.0 00:10:45.958 Attached to 0000:00:13.0 00:10:45.958 Attached to 0000:00:12.0 00:10:45.958 Reset controller to setup AER completions for this process 00:10:45.958 Registering asynchronous event callbacks... 
00:10:45.958 Getting orig temperature thresholds of all controllers 00:10:45.958 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:45.958 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:45.958 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:45.958 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:45.958 Setting all controllers temperature threshold low to trigger AER 00:10:45.958 Waiting for all controllers temperature threshold to be set lower 00:10:45.958 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:45.958 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:45.958 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:45.958 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:45.958 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:45.958 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:45.958 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:45.958 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:45.958 Waiting for all controllers to trigger AER and reset threshold 00:10:45.958 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:45.958 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:45.958 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:45.958 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:45.958 Cleaning up... 00:10:45.958 00:10:45.958 real 0m0.629s 00:10:45.958 user 0m0.223s 00:10:45.958 sys 0m0.300s 00:10:45.959 10:22:45 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.959 10:22:45 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:45.959 ************************************ 00:10:45.959 END TEST nvme_multi_aen 00:10:45.959 ************************************ 00:10:45.959 10:22:45 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:45.959 10:22:45 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:45.959 10:22:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.959 10:22:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:45.959 ************************************ 00:10:45.959 START TEST nvme_startup 00:10:45.959 ************************************ 00:10:45.959 10:22:45 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:46.214 Initializing NVMe Controllers 00:10:46.214 Attached to 0000:00:10.0 00:10:46.214 Attached to 0000:00:11.0 00:10:46.214 Attached to 0000:00:13.0 00:10:46.214 Attached to 0000:00:12.0 00:10:46.214 Initialization complete. 00:10:46.214 Time used:181198.250 (us). 
00:10:46.214 00:10:46.214 real 0m0.280s 00:10:46.214 user 0m0.095s 00:10:46.214 sys 0m0.139s 00:10:46.214 10:22:45 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.214 10:22:45 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:46.214 ************************************ 00:10:46.214 END TEST nvme_startup 00:10:46.214 ************************************ 00:10:46.215 10:22:45 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:46.215 10:22:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:46.215 10:22:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.215 10:22:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:46.215 ************************************ 00:10:46.215 START TEST nvme_multi_secondary 00:10:46.215 ************************************ 00:10:46.215 10:22:45 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:10:46.215 10:22:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65048 00:10:46.215 10:22:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:46.215 10:22:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65049 00:10:46.215 10:22:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:46.215 10:22:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:50.403 Initializing NVMe Controllers 00:10:50.403 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:50.403 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:50.403 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:50.403 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:50.403 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:50.403 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:50.403 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:50.403 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:50.403 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:50.403 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:50.403 Initialization complete. Launching workers. 
00:10:50.403 ======================================================== 00:10:50.403 Latency(us) 00:10:50.403 Device Information : IOPS MiB/s Average min max 00:10:50.403 PCIE (0000:00:10.0) NSID 1 from core 2: 3120.66 12.19 5125.84 1325.58 16185.33 00:10:50.403 PCIE (0000:00:11.0) NSID 1 from core 2: 3120.66 12.19 5126.55 1327.11 15977.46 00:10:50.403 PCIE (0000:00:13.0) NSID 1 from core 2: 3120.66 12.19 5120.46 1339.01 14990.59 00:10:50.403 PCIE (0000:00:12.0) NSID 1 from core 2: 3120.66 12.19 5119.57 1315.32 13686.42 00:10:50.403 PCIE (0000:00:12.0) NSID 2 from core 2: 3120.66 12.19 5119.47 1225.76 14392.30 00:10:50.403 PCIE (0000:00:12.0) NSID 3 from core 2: 3120.66 12.19 5119.42 1381.90 13778.55 00:10:50.403 ======================================================== 00:10:50.403 Total : 18723.99 73.14 5121.89 1225.76 16185.33 00:10:50.403 00:10:50.403 10:22:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65048 00:10:50.403 Initializing NVMe Controllers 00:10:50.403 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:50.403 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:50.403 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:50.403 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:50.403 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:50.403 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:50.403 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:50.403 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:50.403 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:50.403 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:50.403 Initialization complete. Launching workers. 00:10:50.403 ======================================================== 00:10:50.403 Latency(us) 00:10:50.403 Device Information : IOPS MiB/s Average min max 00:10:50.403 PCIE (0000:00:10.0) NSID 1 from core 1: 4971.74 19.42 3215.69 1745.77 7372.60 00:10:50.403 PCIE (0000:00:11.0) NSID 1 from core 1: 4971.74 19.42 3217.61 1728.87 7470.40 00:10:50.403 PCIE (0000:00:13.0) NSID 1 from core 1: 4971.74 19.42 3217.63 1715.18 7231.71 00:10:50.403 PCIE (0000:00:12.0) NSID 1 from core 1: 4971.74 19.42 3217.66 1590.45 7467.03 00:10:50.403 PCIE (0000:00:12.0) NSID 2 from core 1: 4971.74 19.42 3217.77 1545.95 7420.66 00:10:50.403 PCIE (0000:00:12.0) NSID 3 from core 1: 4971.74 19.42 3217.78 1589.16 7409.50 00:10:50.403 ======================================================== 00:10:50.403 Total : 29830.45 116.53 3217.36 1545.95 7470.40 00:10:50.403 00:10:51.800 Initializing NVMe Controllers 00:10:51.800 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:51.800 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:51.800 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:51.800 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:51.800 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:51.800 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:51.800 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:51.800 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:51.800 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:51.800 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:51.800 Initialization complete. Launching workers. 
00:10:51.800 ======================================================== 00:10:51.800 Latency(us) 00:10:51.800 Device Information : IOPS MiB/s Average min max 00:10:51.800 PCIE (0000:00:10.0) NSID 1 from core 0: 8402.15 32.82 1902.72 915.34 7238.34 00:10:51.800 PCIE (0000:00:11.0) NSID 1 from core 0: 8402.15 32.82 1903.79 921.75 7796.06 00:10:51.800 PCIE (0000:00:13.0) NSID 1 from core 0: 8402.15 32.82 1903.76 875.26 7043.68 00:10:51.800 PCIE (0000:00:12.0) NSID 1 from core 0: 8402.15 32.82 1903.73 800.67 6962.40 00:10:51.800 PCIE (0000:00:12.0) NSID 2 from core 0: 8402.15 32.82 1903.70 766.29 7774.16 00:10:51.800 PCIE (0000:00:12.0) NSID 3 from core 0: 8405.35 32.83 1902.94 696.31 7657.48 00:10:51.800 ======================================================== 00:10:51.800 Total : 50416.13 196.94 1903.44 696.31 7796.06 00:10:51.800 00:10:51.800 10:22:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65049 00:10:51.800 10:22:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65118 00:10:51.800 10:22:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:51.800 10:22:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65119 00:10:51.800 10:22:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:51.800 10:22:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:55.090 Initializing NVMe Controllers 00:10:55.090 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:55.090 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:55.090 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:55.090 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:55.090 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:55.090 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:55.090 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:55.090 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:55.090 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:55.090 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:55.090 Initialization complete. Launching workers. 
00:10:55.090 ======================================================== 00:10:55.090 Latency(us) 00:10:55.090 Device Information : IOPS MiB/s Average min max 00:10:55.090 PCIE (0000:00:10.0) NSID 1 from core 0: 5472.08 21.38 2921.78 923.38 7042.76 00:10:55.090 PCIE (0000:00:11.0) NSID 1 from core 0: 5472.08 21.38 2923.63 957.64 7027.14 00:10:55.090 PCIE (0000:00:13.0) NSID 1 from core 0: 5472.08 21.38 2923.81 948.31 7374.58 00:10:55.090 PCIE (0000:00:12.0) NSID 1 from core 0: 5472.08 21.38 2923.97 946.39 7051.88 00:10:55.090 PCIE (0000:00:12.0) NSID 2 from core 0: 5472.08 21.38 2924.27 952.01 6979.39 00:10:55.090 PCIE (0000:00:12.0) NSID 3 from core 0: 5477.41 21.40 2921.55 958.80 6862.79 00:10:55.090 ======================================================== 00:10:55.090 Total : 32837.80 128.27 2923.17 923.38 7374.58 00:10:55.090 00:10:55.350 Initializing NVMe Controllers 00:10:55.350 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:55.350 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:55.350 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:55.350 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:55.350 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:55.350 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:55.350 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:55.350 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:55.350 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:55.350 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:55.350 Initialization complete. Launching workers. 00:10:55.350 ======================================================== 00:10:55.350 Latency(us) 00:10:55.350 Device Information : IOPS MiB/s Average min max 00:10:55.350 PCIE (0000:00:10.0) NSID 1 from core 1: 5213.95 20.37 3066.20 1012.81 6010.03 00:10:55.350 PCIE (0000:00:11.0) NSID 1 from core 1: 5213.95 20.37 3067.95 1023.07 5792.29 00:10:55.350 PCIE (0000:00:13.0) NSID 1 from core 1: 5213.95 20.37 3067.91 1018.50 5731.09 00:10:55.350 PCIE (0000:00:12.0) NSID 1 from core 1: 5213.95 20.37 3067.87 1003.41 5694.60 00:10:55.350 PCIE (0000:00:12.0) NSID 2 from core 1: 5213.95 20.37 3067.86 1024.37 6062.89 00:10:55.351 PCIE (0000:00:12.0) NSID 3 from core 1: 5213.95 20.37 3067.82 1001.93 5928.77 00:10:55.351 ======================================================== 00:10:55.351 Total : 31283.73 122.20 3067.60 1001.93 6062.89 00:10:55.351 00:10:57.254 Initializing NVMe Controllers 00:10:57.254 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:57.254 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:57.254 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:57.255 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:57.255 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:57.255 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:57.255 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:57.255 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:57.255 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:57.255 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:57.255 Initialization complete. Launching workers. 
00:10:57.255 ======================================================== 00:10:57.255 Latency(us) 00:10:57.255 Device Information : IOPS MiB/s Average min max 00:10:57.255 PCIE (0000:00:10.0) NSID 1 from core 2: 3368.43 13.16 4748.37 1015.00 11549.15 00:10:57.255 PCIE (0000:00:11.0) NSID 1 from core 2: 3368.43 13.16 4749.74 1012.02 10881.76 00:10:57.255 PCIE (0000:00:13.0) NSID 1 from core 2: 3368.43 13.16 4749.67 1035.24 10916.64 00:10:57.255 PCIE (0000:00:12.0) NSID 1 from core 2: 3368.43 13.16 4749.35 1019.99 11194.71 00:10:57.255 PCIE (0000:00:12.0) NSID 2 from core 2: 3368.43 13.16 4749.02 1043.57 10863.90 00:10:57.255 PCIE (0000:00:12.0) NSID 3 from core 2: 3368.43 13.16 4749.41 1032.35 10931.45 00:10:57.255 ======================================================== 00:10:57.255 Total : 20210.59 78.95 4749.26 1012.02 11549.15 00:10:57.255 00:10:57.255 10:22:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65118 00:10:57.255 10:22:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65119 00:10:57.255 00:10:57.255 real 0m10.867s 00:10:57.255 user 0m18.590s 00:10:57.255 sys 0m1.095s 00:10:57.255 10:22:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.255 10:22:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:57.255 ************************************ 00:10:57.255 END TEST nvme_multi_secondary 00:10:57.255 ************************************ 00:10:57.255 10:22:56 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:57.255 10:22:56 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:57.255 10:22:56 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64055 ]] 00:10:57.255 10:22:56 nvme -- common/autotest_common.sh@1094 -- # kill 64055 00:10:57.255 10:22:56 nvme -- common/autotest_common.sh@1095 -- # wait 64055 00:10:57.255 [2024-12-07 10:22:56.467867] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.468261] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.468345] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.468395] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.474914] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.475041] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.475085] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.475152] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.479674] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 
00:10:57.255 [2024-12-07 10:22:56.479741] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.479768] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.479798] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.483918] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.484005] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.484035] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.255 [2024-12-07 10:22:56.484066] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64993) is not found. Dropping the request. 00:10:57.514 10:22:56 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:10:57.514 10:22:56 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:10:57.514 10:22:56 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:57.514 10:22:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:57.514 10:22:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.514 10:22:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:57.514 ************************************ 00:10:57.514 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:57.514 ************************************ 00:10:57.514 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:57.514 * Looking for test storage... 
00:10:57.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:57.514 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:57.514 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:10:57.514 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:57.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.777 --rc genhtml_branch_coverage=1 00:10:57.777 --rc genhtml_function_coverage=1 00:10:57.777 --rc genhtml_legend=1 00:10:57.777 --rc geninfo_all_blocks=1 00:10:57.777 --rc geninfo_unexecuted_blocks=1 00:10:57.777 00:10:57.777 ' 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:57.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.777 --rc genhtml_branch_coverage=1 00:10:57.777 --rc genhtml_function_coverage=1 00:10:57.777 --rc genhtml_legend=1 00:10:57.777 --rc geninfo_all_blocks=1 00:10:57.777 --rc geninfo_unexecuted_blocks=1 00:10:57.777 00:10:57.777 ' 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:57.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.777 --rc genhtml_branch_coverage=1 00:10:57.777 --rc genhtml_function_coverage=1 00:10:57.777 --rc genhtml_legend=1 00:10:57.777 --rc geninfo_all_blocks=1 00:10:57.777 --rc geninfo_unexecuted_blocks=1 00:10:57.777 00:10:57.777 ' 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:57.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.777 --rc genhtml_branch_coverage=1 00:10:57.777 --rc genhtml_function_coverage=1 00:10:57.777 --rc genhtml_legend=1 00:10:57.777 --rc geninfo_all_blocks=1 00:10:57.777 --rc geninfo_unexecuted_blocks=1 00:10:57.777 00:10:57.777 ' 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:57.777 
10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:57.777 10:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65285 00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65285 00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65285 ']' 00:10:57.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
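[Editor's note, not part of the captured log] The trace above resolves the target PCI address by asking gen_nvme.sh for the generated bdev config and extracting every traddr with jq, then taking the first entry (0000:00:10.0 on this rig) before spdk_tgt is launched with -m 0xF. A condensed sketch of that lookup; the gen_nvme.sh path and jq filter are copied from the trace, the wrapper function name is illustrative only.

    get_first_nvme_bdf() {
        local bdfs
        bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1   # bail out if no controllers were found
        echo "${bdfs[0]}"                   # 0000:00:10.0 in this run
    }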
00:10:57.777 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.778 10:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:58.054 [2024-12-07 10:22:57.171213] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:10:58.054 [2024-12-07 10:22:57.171328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65285 ] 00:10:58.054 [2024-12-07 10:22:57.371804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.328 [2024-12-07 10:22:57.489317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.328 [2024-12-07 10:22:57.489486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.328 [2024-12-07 10:22:57.489667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.328 [2024-12-07 10:22:57.489880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:59.265 nvme0n1 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_jZWcm.txt 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:59.265 true 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733566978 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65314 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:59.265 10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:59.265 
10:22:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:01.791 [2024-12-07 10:23:00.566138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:01.791 [2024-12-07 10:23:00.566646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:01.791 [2024-12-07 10:23:00.566787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:01.791 [2024-12-07 10:23:00.566900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.791 [2024-12-07 10:23:00.569183] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65314 00:11:01.791 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65314 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65314 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_jZWcm.txt 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_jZWcm.txt 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65285 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65285 ']' 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65285 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65285 00:11:01.791 killing process with pid 65285 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65285' 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65285 00:11:01.791 10:23:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65285 00:11:04.318 10:23:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:04.318 10:23:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:04.318 ************************************ 00:11:04.318 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:04.318 ************************************ 00:11:04.318 00:11:04.318 real 0m6.439s 
00:11:04.318 user 0m22.179s 00:11:04.318 sys 0m0.922s 00:11:04.318 10:23:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.318 10:23:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:04.318 10:23:03 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:04.318 10:23:03 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:04.318 10:23:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:04.318 10:23:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.318 10:23:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.318 ************************************ 00:11:04.318 START TEST nvme_fio 00:11:04.318 ************************************ 00:11:04.318 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:11:04.318 10:23:03 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:04.318 10:23:03 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:04.318 10:23:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:04.318 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:04.318 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:11:04.318 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:04.318 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:04.318 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:04.318 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:04.318 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:04.318 10:23:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:04.318 10:23:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:04.318 10:23:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:04.318 10:23:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:04.318 10:23:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:04.318 10:23:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:04.318 10:23:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:04.577 10:23:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:04.577 10:23:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:04.577 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:04.577 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:04.577 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:04.577 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:04.577 10:23:03 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:04.577 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:04.577 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:04.577 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:04.577 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:04.835 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:04.835 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:04.835 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:04.835 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:04.835 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:04.835 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:04.835 10:23:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:05.093 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:05.093 fio-3.35 00:11:05.093 Starting 1 thread 00:11:08.378 00:11:08.378 test: (groupid=0, jobs=1): err= 0: pid=65465: Sat Dec 7 10:23:07 2024 00:11:08.378 read: IOPS=22.6k, BW=88.4MiB/s (92.7MB/s)(177MiB/2001msec) 00:11:08.378 slat (usec): min=3, max=106, avg= 4.50, stdev= 1.29 00:11:08.378 clat (usec): min=269, max=11949, avg=2817.26, stdev=400.50 00:11:08.378 lat (usec): min=273, max=12055, avg=2821.76, stdev=401.00 00:11:08.378 clat percentiles (usec): 00:11:08.378 | 1.00th=[ 2245], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2671], 00:11:08.378 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:11:08.378 | 70.00th=[ 2835], 80.00th=[ 2900], 90.00th=[ 2999], 95.00th=[ 3097], 00:11:08.378 | 99.00th=[ 4555], 99.50th=[ 5145], 99.90th=[ 7570], 99.95th=[ 9372], 00:11:08.378 | 99.99th=[11731] 00:11:08.378 bw ( KiB/s): min=85568, max=93616, per=99.66%, avg=90181.33, stdev=4151.45, samples=3 00:11:08.378 iops : min=21392, max=23404, avg=22545.33, stdev=1037.86, samples=3 00:11:08.378 write: IOPS=22.5k, BW=87.9MiB/s (92.1MB/s)(176MiB/2001msec); 0 zone resets 00:11:08.378 slat (nsec): min=3800, max=39708, avg=4683.68, stdev=1205.77 00:11:08.378 clat (usec): min=276, max=11853, avg=2830.09, stdev=450.03 00:11:08.378 lat (usec): min=281, max=11867, avg=2834.77, stdev=450.43 00:11:08.378 clat percentiles (usec): 00:11:08.378 | 1.00th=[ 2311], 5.00th=[ 2573], 10.00th=[ 2638], 20.00th=[ 2671], 00:11:08.378 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:11:08.378 | 70.00th=[ 2835], 80.00th=[ 2900], 90.00th=[ 2999], 95.00th=[ 3097], 00:11:08.378 | 99.00th=[ 4686], 99.50th=[ 5276], 99.90th=[ 9241], 99.95th=[10159], 00:11:08.378 | 99.99th=[11469] 00:11:08.378 bw ( KiB/s): min=85328, max=94832, per=100.00%, avg=90416.00, stdev=4787.50, samples=3 00:11:08.378 iops : min=21332, max=23708, avg=22604.00, stdev=1196.88, samples=3 00:11:08.378 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:11:08.379 lat (msec) : 2=0.60%, 4=97.64%, 10=1.67%, 20=0.05% 00:11:08.379 cpu : usr=99.40%, sys=0.05%, ctx=3, majf=0, minf=607 
00:11:08.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:08.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.379 issued rwts: total=45265,45017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.379 00:11:08.379 Run status group 0 (all jobs): 00:11:08.379 READ: bw=88.4MiB/s (92.7MB/s), 88.4MiB/s-88.4MiB/s (92.7MB/s-92.7MB/s), io=177MiB (185MB), run=2001-2001msec 00:11:08.379 WRITE: bw=87.9MiB/s (92.1MB/s), 87.9MiB/s-87.9MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:11:08.637 ----------------------------------------------------- 00:11:08.637 Suppressions used: 00:11:08.637 count bytes template 00:11:08.637 1 32 /usr/src/fio/parse.c 00:11:08.637 1 8 libtcmalloc_minimal.so 00:11:08.637 ----------------------------------------------------- 00:11:08.637 00:11:08.637 10:23:07 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:08.637 10:23:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:08.637 10:23:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:08.637 10:23:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:08.897 10:23:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:08.897 10:23:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:09.157 10:23:08 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:09.158 10:23:08 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:09.158 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:09.158 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:09.158 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:09.158 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:09.158 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:09.158 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:09.158 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:09.158 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:09.158 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:09.158 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:09.158 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:09.417 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:09.417 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:09.417 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:09.417 10:23:08 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:09.417 10:23:08 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:09.417 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:09.417 fio-3.35 00:11:09.417 Starting 1 thread 00:11:13.613 00:11:13.613 test: (groupid=0, jobs=1): err= 0: pid=65531: Sat Dec 7 10:23:12 2024 00:11:13.613 read: IOPS=21.8k, BW=85.3MiB/s (89.5MB/s)(171MiB/2001msec) 00:11:13.613 slat (nsec): min=3693, max=72441, avg=4505.24, stdev=1997.13 00:11:13.613 clat (usec): min=618, max=11516, avg=2919.04, stdev=345.52 00:11:13.613 lat (usec): min=631, max=11588, avg=2923.55, stdev=345.84 00:11:13.613 clat percentiles (usec): 00:11:13.613 | 1.00th=[ 2507], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:11:13.613 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2933], 00:11:13.613 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3130], 95.00th=[ 3261], 00:11:13.613 | 99.00th=[ 4293], 99.50th=[ 4817], 99.90th=[ 6128], 99.95th=[ 9110], 00:11:13.613 | 99.99th=[11207] 00:11:13.613 bw ( KiB/s): min=82304, max=89992, per=98.81%, avg=86346.67, stdev=3859.37, samples=3 00:11:13.613 iops : min=20576, max=22498, avg=21586.67, stdev=964.84, samples=3 00:11:13.613 write: IOPS=21.7k, BW=84.7MiB/s (88.8MB/s)(170MiB/2001msec); 0 zone resets 00:11:13.613 slat (nsec): min=3769, max=40288, avg=4813.16, stdev=2098.88 00:11:13.613 clat (usec): min=718, max=11306, avg=2935.98, stdev=355.47 00:11:13.613 lat (usec): min=730, max=11330, avg=2940.79, stdev=355.76 00:11:13.613 clat percentiles (usec): 00:11:13.613 | 1.00th=[ 2507], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:11:13.613 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:11:13.613 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3163], 95.00th=[ 3294], 00:11:13.613 | 99.00th=[ 4424], 99.50th=[ 4817], 99.90th=[ 7046], 99.95th=[ 9372], 00:11:13.613 | 99.99th=[10945] 00:11:13.613 bw ( KiB/s): min=82080, max=90456, per=99.66%, avg=86466.67, stdev=4202.11, samples=3 00:11:13.613 iops : min=20520, max=22614, avg=21616.67, stdev=1050.53, samples=3 00:11:13.613 lat (usec) : 750=0.01%, 1000=0.01% 00:11:13.613 lat (msec) : 2=0.25%, 4=98.31%, 10=1.39%, 20=0.03% 00:11:13.613 cpu : usr=99.30%, sys=0.15%, ctx=4, majf=0, minf=607 00:11:13.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:13.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.614 issued rwts: total=43714,43403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.614 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.614 00:11:13.614 Run status group 0 (all jobs): 00:11:13.614 READ: bw=85.3MiB/s (89.5MB/s), 85.3MiB/s-85.3MiB/s (89.5MB/s-89.5MB/s), io=171MiB (179MB), run=2001-2001msec 00:11:13.614 WRITE: bw=84.7MiB/s (88.8MB/s), 84.7MiB/s-84.7MiB/s (88.8MB/s-88.8MB/s), io=170MiB (178MB), run=2001-2001msec 00:11:13.614 ----------------------------------------------------- 00:11:13.614 Suppressions used: 00:11:13.614 count bytes template 00:11:13.614 1 32 /usr/src/fio/parse.c 00:11:13.614 1 8 libtcmalloc_minimal.so 00:11:13.614 ----------------------------------------------------- 00:11:13.614 00:11:13.614 10:23:12 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:13.614 10:23:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:13.614 10:23:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:13.614 10:23:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:13.614 10:23:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:13.614 10:23:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:13.874 10:23:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:13.874 10:23:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:13.874 10:23:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:14.133 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:14.133 fio-3.35 00:11:14.133 Starting 1 thread 00:11:18.326 00:11:18.326 test: (groupid=0, jobs=1): err= 0: pid=65592: Sat Dec 7 10:23:17 2024 00:11:18.326 read: IOPS=21.4k, BW=83.6MiB/s (87.7MB/s)(167MiB/2001msec) 00:11:18.326 slat (usec): min=3, max=261, avg= 4.64, stdev= 2.57 00:11:18.326 clat (usec): min=176, max=10833, avg=2976.56, stdev=361.13 00:11:18.326 lat (usec): min=179, max=10915, avg=2981.21, stdev=361.48 00:11:18.326 clat percentiles (usec): 00:11:18.326 | 1.00th=[ 2507], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2802], 00:11:18.326 | 30.00th=[ 2835], 40.00th=[ 2900], 
50.00th=[ 2933], 60.00th=[ 2966], 00:11:18.326 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3195], 95.00th=[ 3392], 00:11:18.326 | 99.00th=[ 4359], 99.50th=[ 4817], 99.90th=[ 6325], 99.95th=[ 8717], 00:11:18.326 | 99.99th=[10683] 00:11:18.326 bw ( KiB/s): min=80760, max=88336, per=98.82%, avg=84637.33, stdev=3791.16, samples=3 00:11:18.326 iops : min=20190, max=22084, avg=21159.33, stdev=947.79, samples=3 00:11:18.326 write: IOPS=21.2k, BW=83.0MiB/s (87.0MB/s)(166MiB/2001msec); 0 zone resets 00:11:18.326 slat (usec): min=3, max=580, avg= 4.99, stdev= 4.13 00:11:18.326 clat (usec): min=218, max=10751, avg=2996.79, stdev=368.82 00:11:18.326 lat (usec): min=222, max=10765, avg=3001.78, stdev=369.17 00:11:18.326 clat percentiles (usec): 00:11:18.326 | 1.00th=[ 2540], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:11:18.326 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2999], 00:11:18.326 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3425], 00:11:18.326 | 99.00th=[ 4424], 99.50th=[ 4883], 99.90th=[ 6980], 99.95th=[ 9110], 00:11:18.326 | 99.99th=[10552] 00:11:18.326 bw ( KiB/s): min=80704, max=88280, per=99.69%, avg=84733.33, stdev=3810.99, samples=3 00:11:18.326 iops : min=20176, max=22070, avg=21183.33, stdev=952.75, samples=3 00:11:18.326 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:11:18.326 lat (msec) : 2=0.24%, 4=98.06%, 10=1.62%, 20=0.03% 00:11:18.326 cpu : usr=98.60%, sys=0.35%, ctx=4, majf=0, minf=607 00:11:18.326 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:18.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.326 issued rwts: total=42847,42520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.326 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.326 00:11:18.326 Run status group 0 (all jobs): 00:11:18.326 READ: bw=83.6MiB/s (87.7MB/s), 83.6MiB/s-83.6MiB/s (87.7MB/s-87.7MB/s), io=167MiB (176MB), run=2001-2001msec 00:11:18.326 WRITE: bw=83.0MiB/s (87.0MB/s), 83.0MiB/s-83.0MiB/s (87.0MB/s-87.0MB/s), io=166MiB (174MB), run=2001-2001msec 00:11:18.326 ----------------------------------------------------- 00:11:18.326 Suppressions used: 00:11:18.326 count bytes template 00:11:18.326 1 32 /usr/src/fio/parse.c 00:11:18.326 1 8 libtcmalloc_minimal.so 00:11:18.326 ----------------------------------------------------- 00:11:18.326 00:11:18.326 10:23:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:18.326 10:23:17 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:18.326 10:23:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:18.326 10:23:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:18.585 10:23:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:18.585 10:23:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:18.849 10:23:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:18.849 10:23:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:18.849 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:18.849 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:18.849 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:18.849 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:18.849 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:18.849 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:18.849 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:18.849 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:18.849 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:18.849 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:18.849 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:19.109 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:19.109 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:19.109 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:19.109 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:19.109 10:23:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:19.109 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:19.109 fio-3.35 00:11:19.109 Starting 1 thread 00:11:24.379 00:11:24.379 test: (groupid=0, jobs=1): err= 0: pid=65658: Sat Dec 7 10:23:23 2024 00:11:24.379 read: IOPS=21.2k, BW=83.0MiB/s (87.0MB/s)(166MiB/2001msec) 00:11:24.379 slat (usec): min=3, max=512, avg= 4.61, stdev= 3.31 00:11:24.379 clat (usec): min=185, max=11487, avg=3000.43, stdev=418.91 00:11:24.379 lat (usec): min=189, max=11566, avg=3005.04, stdev=419.32 00:11:24.379 clat percentiles (usec): 00:11:24.379 | 1.00th=[ 2507], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2835], 00:11:24.379 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2999], 00:11:24.379 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3392], 00:11:24.379 | 99.00th=[ 4948], 99.50th=[ 5932], 99.90th=[ 6915], 99.95th=[ 8717], 00:11:24.379 | 99.99th=[11076] 00:11:24.379 bw ( KiB/s): min=79456, max=86096, per=98.24%, avg=83477.33, stdev=3535.25, samples=3 00:11:24.379 iops : min=19864, max=21524, avg=20869.33, stdev=883.81, samples=3 00:11:24.379 write: IOPS=21.1k, BW=82.4MiB/s (86.4MB/s)(165MiB/2001msec); 0 zone resets 00:11:24.379 slat (usec): min=3, max=538, avg= 4.92, stdev= 3.71 00:11:24.379 clat (usec): min=177, max=11106, avg=3017.92, stdev=427.96 00:11:24.379 lat (usec): min=181, max=11120, avg=3022.84, stdev=428.42 00:11:24.379 clat percentiles (usec): 00:11:24.379 | 1.00th=[ 2540], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2835], 00:11:24.379 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:11:24.379 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3392], 00:11:24.379 | 99.00th=[ 
5080], 99.50th=[ 6063], 99.90th=[ 7439], 99.95th=[ 8979], 00:11:24.379 | 99.99th=[10814] 00:11:24.379 bw ( KiB/s): min=79328, max=86248, per=99.05%, avg=83554.67, stdev=3706.07, samples=3 00:11:24.379 iops : min=19832, max=21562, avg=20888.67, stdev=926.52, samples=3 00:11:24.379 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:11:24.379 lat (msec) : 2=0.27%, 4=97.71%, 10=1.95%, 20=0.03% 00:11:24.379 cpu : usr=98.95%, sys=0.05%, ctx=22, majf=0, minf=605 00:11:24.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:24.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.379 issued rwts: total=42508,42199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.379 00:11:24.379 Run status group 0 (all jobs): 00:11:24.379 READ: bw=83.0MiB/s (87.0MB/s), 83.0MiB/s-83.0MiB/s (87.0MB/s-87.0MB/s), io=166MiB (174MB), run=2001-2001msec 00:11:24.379 WRITE: bw=82.4MiB/s (86.4MB/s), 82.4MiB/s-82.4MiB/s (86.4MB/s-86.4MB/s), io=165MiB (173MB), run=2001-2001msec 00:11:24.379 ----------------------------------------------------- 00:11:24.379 Suppressions used: 00:11:24.379 count bytes template 00:11:24.379 1 32 /usr/src/fio/parse.c 00:11:24.379 1 8 libtcmalloc_minimal.so 00:11:24.379 ----------------------------------------------------- 00:11:24.379 00:11:24.379 ************************************ 00:11:24.379 END TEST nvme_fio 00:11:24.379 ************************************ 00:11:24.379 10:23:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:24.379 10:23:23 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:24.379 00:11:24.379 real 0m20.211s 00:11:24.379 user 0m14.920s 00:11:24.379 sys 0m6.519s 00:11:24.379 10:23:23 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.379 10:23:23 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:24.379 ************************************ 00:11:24.379 END TEST nvme 00:11:24.379 ************************************ 00:11:24.379 00:11:24.379 real 1m35.577s 00:11:24.379 user 3m43.135s 00:11:24.379 sys 0m26.189s 00:11:24.379 10:23:23 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.379 10:23:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:24.379 10:23:23 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:11:24.379 10:23:23 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:24.379 10:23:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:24.379 10:23:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.379 10:23:23 -- common/autotest_common.sh@10 -- # set +x 00:11:24.379 ************************************ 00:11:24.379 START TEST nvme_scc 00:11:24.379 ************************************ 00:11:24.379 10:23:23 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:24.379 * Looking for test storage... 
00:11:24.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:24.379 10:23:23 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:24.379 10:23:23 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:24.379 10:23:23 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:24.639 10:23:23 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@345 -- # : 1 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@368 -- # return 0 00:11:24.639 10:23:23 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.639 10:23:23 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:24.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.639 --rc genhtml_branch_coverage=1 00:11:24.639 --rc genhtml_function_coverage=1 00:11:24.639 --rc genhtml_legend=1 00:11:24.639 --rc geninfo_all_blocks=1 00:11:24.639 --rc geninfo_unexecuted_blocks=1 00:11:24.639 00:11:24.639 ' 00:11:24.639 10:23:23 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:24.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.639 --rc genhtml_branch_coverage=1 00:11:24.639 --rc genhtml_function_coverage=1 00:11:24.639 --rc genhtml_legend=1 00:11:24.639 --rc geninfo_all_blocks=1 00:11:24.639 --rc geninfo_unexecuted_blocks=1 00:11:24.639 00:11:24.639 ' 00:11:24.639 10:23:23 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:24.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.639 --rc genhtml_branch_coverage=1 00:11:24.639 --rc genhtml_function_coverage=1 00:11:24.639 --rc genhtml_legend=1 00:11:24.639 --rc geninfo_all_blocks=1 00:11:24.639 --rc geninfo_unexecuted_blocks=1 00:11:24.639 00:11:24.639 ' 00:11:24.639 10:23:23 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:24.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.639 --rc genhtml_branch_coverage=1 00:11:24.639 --rc genhtml_function_coverage=1 00:11:24.639 --rc genhtml_legend=1 00:11:24.639 --rc geninfo_all_blocks=1 00:11:24.639 --rc geninfo_unexecuted_blocks=1 00:11:24.639 00:11:24.639 ' 00:11:24.639 10:23:23 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:24.639 10:23:23 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:24.639 10:23:23 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:24.639 10:23:23 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:24.639 10:23:23 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.639 10:23:23 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.639 10:23:23 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.639 10:23:23 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.639 10:23:23 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.639 10:23:23 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:24.639 10:23:23 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
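[Editor's note, not part of the captured log] The lt 1.15 2 / cmp_versions trace above is a plain element-wise version check used to decide whether the installed lcov is new enough for the branch/function coverage options exported here. A minimal sketch of the same idea, not the verbatim SPDK helper:

    version_lt() {                        # usage: version_lt 1.15 2
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first smaller component decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                          # equal versions are not "less than"
    }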
00:11:24.639 10:23:23 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:24.639 10:23:23 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:24.639 10:23:23 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:24.639 10:23:23 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:24.639 10:23:23 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:24.639 10:23:23 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:24.639 10:23:23 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:24.639 10:23:23 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:24.639 10:23:23 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:24.639 10:23:23 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:24.639 10:23:23 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:24.639 10:23:23 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:24.639 10:23:23 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:24.639 10:23:23 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:25.211 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:25.469 Waiting for block devices as requested 00:11:25.469 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:25.727 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:25.727 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:25.986 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:31.277 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:31.277 10:23:30 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:31.277 10:23:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:31.277 10:23:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:31.277 10:23:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:31.277 10:23:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
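[Editor's note, not part of the captured log] The long trace running through this part of the log is functions.sh caching every field of `nvme id-ctrl` for each attached controller into a bash associative array (nvme0[vid]=0x1b36, nvme0[ssvid]=0x1af4, and so on), one IFS=: / read / eval round per register. A condensed sketch of that parsing pattern, assuming a stock nvme-cli binary on PATH and an array simply named ctrl rather than the generated nvme0:

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # key without the column padding
        read -r val <<< "$val"            # trim surrounding whitespace, keep inner spaces
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=$val                   # e.g. ctrl[vid]=0x1b36
    done < <(nvme id-ctrl /dev/nvme0)
    echo "${ctrl[mn]}"                    # "QEMU NVMe Ctrl" on this test VM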
00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
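The controller-level capability words being collected here (oacs=0x12a, frmw=0x3, lpa=0x7 and, further down in the trace, oncs=0x15d) are what a test like nvme_scc ultimately keys off: SCC is the Simple Copy Command, and its availability is advertised by bit 8 of ONCS. A small check in the same spirit (the helper name is illustrative only, not a functions.sh API):

    # Illustrative helper, not part of functions.sh: does ONCS advertise the Copy command?
    supports_simple_copy() {
        local oncs=$1
        (( oncs & 0x100 ))                     # ONCS bit 8 = Copy command supported
    }
    supports_simple_copy 0x15d && echo "controller advertises Copy"

With the 0x15d reported by this QEMU controller (binary 1 0101 1101), bit 8 is set, so the copy-related cases have something to run against.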
00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:31.278 10:23:30 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:31.278 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.280 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:11:31.280 
10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
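The per-namespace pass works the same way, only through nvme id-ns: nsze/ncap/nuse came back as 0x140000 blocks and flbas as 0x4, and the lbaf4 entry captured just below reports lbads:12 and is marked "(in use)". Putting those together gives the actual geometry; a quick worked sketch using the ng0n1 values from this trace:

    # Worked example with the ng0n1 values above; plain arithmetic, not a functions.sh helper.
    flbas=0x4 nsze=0x140000 lbads=12           # flbas & 0xf = 4 -> lbaf4 is the format "(in use)"
    echo "$(( 1 << lbads ))-byte LBAs, $(( nsze * (1 << lbads) )) bytes total"
    # -> 4096-byte LBAs, 5368709120 bytes total (5 GiB)

The identical loop is then repeated for the block-device view of the same namespace, nvme0n1, which is why the next run of entries mirrors the ng0n1 ones field for field.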
00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:11:31.281 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:31.281 10:23:30 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:31.281 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:31.282 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:31.282 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:31.283 10:23:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:31.283 10:23:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:31.283 10:23:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:31.283 10:23:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:31.283 10:23:30 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 
10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:31.283 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:31.284 
10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.285 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:31.286 10:23:30 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
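[editor's note] At this point the log walks the namespaces under the nvme1 controller, visiting both the generic char node (ng1n1) and the block namespace (nvme1n1) and running id-ns on each. An illustrative sketch of that enumeration, assuming the sysfs layout seen on this QEMU test VM (hypothetical standalone script, not the harness code):

    #!/usr/bin/env bash
    # Sketch only: walk /sys/class/nvme/nvme*, note each controller's PCI
    # address, then list the ngXnY and nvmeXnY nodes beneath it, mirroring
    # the glob the trace uses.
    shopt -s nullglob extglob
    declare -A bdfs=()
    for ctrl in /sys/class/nvme/nvme*; do
        dev=${ctrl##*/}                                           # e.g. nvme1
        bdfs[$dev]=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:10.0
        for ns in "$ctrl/"@("ng${dev#nvme}"|"${dev}n")*; do
            echo "$dev @ ${bdfs[$dev]} -> ${ns##*/}"              # ng1n1, nvme1n1, ...
        done
    done
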
00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:11:31.287 10:23:30 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:11:31.287 10:23:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 
10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:31.288 
10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.288 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:31.289 10:23:30 
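The xtrace above is nvme/functions.sh caching identify data for the namespace nodes: nvme_get() runs nvme-cli's id-ns against each device node, splits every "field : value" line on ':' (IFS=:; read -r reg val), skips entries with an empty value, and evals the pair into a per-device associative array, so later checks can read e.g. nvme1n1[flbas] or ng1n1[lbaf7] without re-querying the drive. A simplified standalone sketch of that loop, assuming nvme-cli's human-readable id-ns output; the whitespace handling here is reduced and the real helper in functions.sh differs in detail:

  declare -A ns_info=()
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}              # field name, e.g. nsze or lbaf0
      val=${val#"${val%%[![:space:]]*}"}    # strip leading spaces from the value
      [[ -n $reg && -n $val ]] || continue  # skip banner lines and empty fields
      ns_info[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1)
  printf 'nsze=%s flbas=%s\n' "${ns_info[nsze]}" "${ns_info[flbas]}"

Run against the QEMU namespace traced above, such a loop ends up with the same values the log records (nsze=0x17a17a, flbas=0x7, the eight lbafN strings, and so on), which is exactly what gets stored into the nvme1n1 and ng1n1 arrays and then indexed into _ctrl_ns.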
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:31.289 10:23:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:31.289 10:23:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:31.289 10:23:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:31.289 10:23:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.289 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:31.290 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:31.290 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:31.291 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.291 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:31.292 
10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:31.292 
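With the controller-level identify data for nvme2 cached, the scan moves on to its namespaces. The for-loop at the end of the trace uses a bash extglob pattern to pick up both the generic character node (ng2n1) and the block node (nvme2n1) under the controller's sysfs directory: with ctrl=/sys/class/nvme/nvme2, ${ctrl##*nvme} expands to 2 and ${ctrl##*/} to nvme2, so the pattern matches entries starting with "ng2" or "nvme2n". A standalone sketch of the same walk (extglob must be enabled for the pattern to parse; nullglob just keeps the loop quiet if a controller has no namespace nodes):

  shopt -s extglob nullglob
  for ctrl in /sys/class/nvme/nvme*; do
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
          echo "controller ${ctrl##*/}: namespace node ${ns##*/}"
      done
  done

Worth noting for this suite: the cached nvme2[oncs]=0x15d has bit 8 (0x100) set, which in the Identify Controller ONCS field advertises Copy command support, presumably the capability the nvme_scc (simple copy command) tests screen controllers for before exercising them.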
10:23:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:11:31.292 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.293 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:11:31.294 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 
10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.294 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.295 10:23:30 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:11:31.295 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.296 10:23:30 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.296 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.297 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.561 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:31.561 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:31.562 10:23:30 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.562 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:31.563 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.563 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:31.564 
10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:31.564 10:23:30 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.564 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.565 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.565 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:31.566 10:23:30 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:31.566 10:23:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:31.566 10:23:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:31.566 10:23:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:31.566 10:23:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:31.566 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:31.566 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:31.567 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 
10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:31.567 10:23:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.567 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 
10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:31.568 
10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:31.568 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.569 10:23:30 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:31.569 10:23:30 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:31.569 10:23:30 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
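The trace above is nvme/functions.sh caching every field of "nvme id-ctrl" output for nvme3 into a bash associative array: IFS=: splits each "reg : val" line, and eval stores it as nvme3[reg]=val. Below is a compressed, standalone sketch of that parsing pattern, not the verbatim helper; the device path /dev/nvme3 and the two fields printed at the end are illustrative assumptions.

    # Sketch only: mirrors the IFS=: / read -r reg val loop traced above.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # field name, e.g. "oncs" or "subnqn"
        val=${val# }                    # drop the single space that follows ":"
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3)
    echo "oncs=${ctrl[oncs]} subnqn=${ctrl[subnqn]}"

The real helper additionally registers the controller in the ctrls/nvmes/bdfs maps, as the functions.sh@60-@63 lines above show.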
00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:11:31.569 10:23:30 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:11:31.569 10:23:30 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:31.569 10:23:30 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:31.569 10:23:30 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:32.509 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.078 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.078 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.078 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.078 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.337 10:23:32 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:33.337 10:23:32 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:33.337 10:23:32 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.337 10:23:32 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:33.337 ************************************ 00:11:33.337 START TEST nvme_simple_copy 00:11:33.337 ************************************ 00:11:33.338 10:23:32 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:33.597 Initializing NVMe Controllers 00:11:33.597 Attaching to 0000:00:10.0 00:11:33.597 Controller supports SCC. Attached to 0000:00:10.0 00:11:33.597 Namespace ID: 1 size: 6GB 00:11:33.597 Initialization complete. 
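Before the simple-copy run above, get_ctrls_with_feature scc walked ctrls[] and kept every controller whose ONCS value has bit 8 set (the NVMe Copy command, i.e. Simple Copy support); all four controllers report oncs=0x15d, so all qualify, and the test settles on nvme1 at 0000:00:10.0. A minimal sketch of that capability test follows, with the device path as an illustrative assumption.

    # Sketch of the ONCS check traced above; /dev/nvme1 is assumed.
    oncs=$(nvme id-ctrl /dev/nvme1 | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
    if (( oncs & 1 << 8 )); then    # bit 8 of ONCS = Copy (Simple Copy) support
        echo "controller supports the Simple Copy command"
    fi

0x15d & 0x100 is non-zero, which is why every ctrl_has_scc call above succeeds.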
00:11:33.597 00:11:33.597 Controller QEMU NVMe Ctrl (12340 ) 00:11:33.597 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:33.597 Namespace Block Size:4096 00:11:33.597 Writing LBAs 0 to 63 with Random Data 00:11:33.597 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:33.597 LBAs matching Written Data: 64 00:11:33.597 00:11:33.597 real 0m0.315s 00:11:33.597 user 0m0.118s 00:11:33.597 sys 0m0.095s 00:11:33.597 10:23:32 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.597 10:23:32 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:33.597 ************************************ 00:11:33.597 END TEST nvme_simple_copy 00:11:33.597 ************************************ 00:11:33.597 00:11:33.597 real 0m9.363s 00:11:33.597 user 0m1.682s 00:11:33.597 sys 0m2.702s 00:11:33.597 10:23:32 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.597 10:23:32 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:33.597 ************************************ 00:11:33.597 END TEST nvme_scc 00:11:33.597 ************************************ 00:11:33.857 10:23:32 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:33.857 10:23:32 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:11:33.857 10:23:32 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:11:33.857 10:23:32 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:11:33.857 10:23:32 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:33.857 10:23:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:33.857 10:23:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.857 10:23:32 -- common/autotest_common.sh@10 -- # set +x 00:11:33.857 ************************************ 00:11:33.857 START TEST nvme_fdp 00:11:33.857 ************************************ 00:11:33.857 10:23:32 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:11:33.857 * Looking for test storage... 00:11:33.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:33.857 10:23:33 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:33.857 10:23:33 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:11:33.857 10:23:33 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:33.857 10:23:33 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.857 10:23:33 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.117 10:23:33 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:11:34.117 10:23:33 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:11:34.117 10:23:33 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.117 10:23:33 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:11:34.117 10:23:33 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.117 10:23:33 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:11:34.117 10:23:33 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:11:34.117 10:23:33 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.117 10:23:33 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:11:34.117 10:23:33 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.117 10:23:33 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.117 10:23:33 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.117 10:23:33 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:11:34.117 10:23:33 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.117 10:23:33 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.117 --rc genhtml_branch_coverage=1 00:11:34.117 --rc genhtml_function_coverage=1 00:11:34.117 --rc genhtml_legend=1 00:11:34.117 --rc geninfo_all_blocks=1 00:11:34.117 --rc geninfo_unexecuted_blocks=1 00:11:34.117 00:11:34.117 ' 00:11:34.117 10:23:33 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.117 --rc genhtml_branch_coverage=1 00:11:34.117 --rc genhtml_function_coverage=1 00:11:34.117 --rc genhtml_legend=1 00:11:34.117 --rc geninfo_all_blocks=1 00:11:34.117 --rc geninfo_unexecuted_blocks=1 00:11:34.117 00:11:34.117 ' 00:11:34.117 10:23:33 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.117 --rc genhtml_branch_coverage=1 00:11:34.117 --rc genhtml_function_coverage=1 00:11:34.117 --rc genhtml_legend=1 00:11:34.117 --rc geninfo_all_blocks=1 00:11:34.117 --rc geninfo_unexecuted_blocks=1 00:11:34.117 00:11:34.118 ' 00:11:34.118 10:23:33 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.118 --rc genhtml_branch_coverage=1 00:11:34.118 --rc genhtml_function_coverage=1 00:11:34.118 --rc genhtml_legend=1 00:11:34.118 --rc geninfo_all_blocks=1 00:11:34.118 --rc geninfo_unexecuted_blocks=1 00:11:34.118 00:11:34.118 ' 00:11:34.118 10:23:33 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:34.118 10:23:33 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:34.118 10:23:33 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:34.118 10:23:33 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:34.118 10:23:33 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:34.118 10:23:33 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.118 10:23:33 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.118 10:23:33 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.118 10:23:33 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.118 10:23:33 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.118 10:23:33 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.118 10:23:33 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.118 10:23:33 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:34.118 10:23:33 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.118 10:23:33 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:34.118 10:23:33 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:34.118 10:23:33 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:34.118 10:23:33 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:34.118 10:23:33 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:34.118 10:23:33 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:34.118 10:23:33 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:34.118 10:23:33 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:34.118 10:23:33 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:34.118 10:23:33 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.118 10:23:33 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:34.688 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:34.948 Waiting for block devices as requested 00:11:34.948 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.208 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.208 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.208 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.569 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:40.569 10:23:39 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:40.569 10:23:39 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:40.569 10:23:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:40.569 10:23:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:40.569 10:23:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:40.569 10:23:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.569 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:40.570 10:23:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:40.570 10:23:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.570 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:40.571 10:23:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 
10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:40.571 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:40.572 10:23:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:11:40.572 10:23:39 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:11:40.572 10:23:39 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.572 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:11:40.573 10:23:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
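The trace above is nvme/functions.sh populating a global bash associative array (ng0n1) from /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1: every "field : value" line of the identify output is split on the colon and stored via eval, which is why the log repeats the IFS=: / read -r reg val / eval pattern for each register. A minimal, hypothetical sketch of that parsing pattern follows; the function name, argument order, and whitespace handling are illustrative and assume nvme-cli's usual "key : value" layout, not the script's exact implementation.

# Hypothetical re-creation of the nvme_get pattern seen in the trace: run
# nvme-cli, split each "field : value" line on the first ':', and store the
# pairs in an associative array supplied by the caller.
nvme_get_sketch() {
    local -n _arr=$1              # nameref to an array the caller declared with -A
    local subcmd=$2 dev=$3 reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}  # "nsze    " -> "nsze", "lbaf  4" -> "lbaf4"
        val=${val# }              # strip the pad space nvme-cli prints after ':'
        [[ -n $reg && -n $val ]] && _arr[$reg]=$val
    done < <(nvme "$subcmd" "$dev")
}

# Usage, assuming nvme-cli is installed and the QEMU namespace from this log exists:
#   declare -A ng0n1=()
#   nvme_get_sketch ng0n1 id-ns /dev/ng0n1
#   echo "${ng0n1[nsze]}"        # -> 0x140000, matching the value captured above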
00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:11:40.573 10:23:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:40.574 10:23:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.574 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:40.575 10:23:39 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:40.575 10:23:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:40.575 10:23:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:40.575 10:23:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:40.575 10:23:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:40.575 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:40.576 10:23:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
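For readability, what the trace above is doing: nvme_get in nvme/functions.sh pipes the output of nvme id-ctrl through an IFS=':' read loop and evals each "field: value" pair into a global associative array (nvme1 here), which is why every field appears as a [[ -n ... ]] test, an eval, and the resulting assignment. A minimal sketch of that pattern, assuming a hypothetical array name (ctrl_info) and a stock nvme-cli on PATH rather than the /usr/local/src/nvme-cli build the script actually invokes:

#!/usr/bin/env bash
# Simplified reconstruction of the parsing pattern seen in the trace above;
# ctrl_info and the plain "nvme" binary are illustrative assumptions, the real
# helper evals into dynamically named globals like nvme1 and uses its own nvme-cli build.
declare -gA ctrl_info=()
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue                      # keep only "field: value" lines
    reg=${reg//[[:space:]]/}                       # collapse spaces in the field name (e.g. "ps    0" -> ps0)
    val="${val#"${val%%[![:space:]]*}"}"           # trim leading whitespace from the value
    ctrl_info[$reg]=$val
done < <(nvme id-ctrl /dev/nvme1)
echo "sn='${ctrl_info[sn]}' mn='${ctrl_info[mn]}' mdts=${ctrl_info[mdts]}"

Unlike the real helper this keeps everything in one local array instead of eval'ing into a named global, but the field/value split is the same mechanism driving every line of this dump.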
00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:40.576 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.577 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:11:40.578 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:11:40.579 10:23:39 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
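One sanity check on the namespace values being captured here: nsze/ncap/nuse come back as 0x17a17a logical blocks and flbas=0x7 selects LBA format 7, which the lbaf7 entry further down in this dump reports as lbads:12, i.e. 4096-byte data blocks. A small, hypothetical helper (not part of nvme/functions.sh) that turns those two numbers into a byte count:

#!/usr/bin/env bash
# Hypothetical helper; nsze and lbads are copied from this dump, the arithmetic
# is simply nsze * 2^lbads.
nsze=0x17a17a    # namespace size in logical blocks (from id-ns above)
lbads=12         # log2(data block size) for the in-use LBA format (lbaf7)
blocks=$((nsze))
block_size=$((1 << lbads))
printf 'namespace: %d blocks x %d B = %d bytes\n' "$blocks" "$block_size" "$((blocks * block_size))"

That works out to a little over 6 GB, consistent with the small QEMU-backed namespaces these FDP tests run against.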
00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:11:40.579 10:23:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.579 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
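The loop visible a little earlier in the trace (functions.sh@54-57) is what drives this second pass: for every controller the script globs both the generic character node (ngXnY) and the block node (nvmeXnY) out of /sys/class/nvme and runs id-ns on each, which is why ng1n1 and nvme1n1 are parsed back to back with identical values. A simplified reconstruction of that walk, assuming a stock nvme-cli instead of the /usr/local/src/nvme-cli build used in this run:

#!/usr/bin/env bash
# Sketch of the namespace enumeration seen in the trace; only the echo/id-ns
# calls differ from the real helper, which hands each node to nvme_get.
shopt -s extglob nullglob
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                        # e.g. ng1n1 or nvme1n1
        echo "namespace node: /dev/$ns_dev"
        nvme id-ns "/dev/$ns_dev" >/dev/null    # the real script parses this into ng1n1[...] / nvme1n1[...]
    done
done

Later in this dump the same pass records the controller's PCI address (0000:00:10.0 for nvme1) into bdfs, so the FDP steps that follow can map array entries back to devices.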
00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:40.580 10:23:39 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:40.580 10:23:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:40.580 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:40.581 10:23:39 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:40.581 10:23:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:40.581 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:40.582 10:23:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:40.582 10:23:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:40.582 10:23:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:40.582 10:23:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
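Earlier in the trace (functions.sh@58-63) each scanned controller was registered into the ctrls, nvmes, bdfs and ordered_ctrls maps before the nvme2 scan continuing below. A hedged sketch of how a caller could walk those maps once the scan completes; the array names mirror the trace, but the loop itself is illustrative only and assumes the per-controller *_ns arrays already exist as globals:

  print_ctrl_ns() {                          # one controller per call keeps the nameref simple
    local ctrl=$1
    local -n ns_map=${nvmes[$ctrl]}          # e.g. nvme1_ns, populated during the scan
    printf 'controller=%s bdf=%s\n' "$ctrl" "${bdfs[$ctrl]}"
    local nsid
    for nsid in "${!ns_map[@]}"; do
      printf '  nsid=%s dev=%s\n' "$nsid" "${ns_map[$nsid]}"
    done
  }
  for ctrl in "${!ctrls[@]}"; do print_ctrl_ns "$ctrl"; done   # e.g. nvme1, nvme2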
00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:40.582 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:40.583 10:23:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
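Because these values land in a plain bash associative array (nvme2), later checks can gate on them with ordinary shell arithmetic. A small hedged example using the oacs value recorded a few entries above (0x12a); the bit position assumes the standard NVMe OACS layout in which bit 3 indicates Namespace Management support:

  # illustrative only; nvme2[oacs] was set to 0x12a by the eval above
  if (( ${nvme2[oacs]:-0} & (1 << 3) )); then
    echo "nvme2 advertises namespace management"
  fi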
00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:40.583 10:23:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.583 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:40.584 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
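The same three-step pattern repeats for every register throughout this trace and continues below: functions.sh@21 splits an id-ctrl/id-ns output line on ':' via IFS and read, @22 skips empty values, and @23 evals the reg/val pair into the controller's associative array. A condensed sketch of that loop, simplified from what the trace shows (the real functions.sh shifts a positional ref and uses eval; the names here are illustrative):

  nvme_get_sketch() {
    local dev=$1; local -n out=$2            # caller passes an array declared with -A
    local reg val
    while IFS=: read -r reg val; do          # e.g. "vid : 0x1b36" -> reg="vid ", val=" 0x1b36"
      reg=${reg//[[:space:]]/}
      [[ -n $reg && -n $val ]] || continue   # mirrors the '[[ -n ... ]]' guard at @22
      out[$reg]=${val# }                     # e.g. out[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")   # binary path as invoked in the trace
  }
  declare -A nvme2_sketch; nvme_get_sketch /dev/nvme2 nvme2_sketch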
00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.585 
10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.585 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.586 10:23:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:11:40.586 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:40.587 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:11:40.853 10:23:39 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 
10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:11:40.853 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:11:40.854 
10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.854 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:11:40.855 10:23:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:40.855 10:23:39 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:40.855 10:23:39 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.855 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:40.856 
10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:40.856 10:23:39 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:40.856 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:40.857 
10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:40.857 10:23:39 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.857 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.858 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:40.858 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:40.858 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:40.858 10:23:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:40.858 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:40.858 10:23:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:40.858 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:40.859 10:23:40 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:40.859 10:23:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.859 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:40.860 10:23:40 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:40.860 10:23:40 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:40.860 10:23:40 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:40.860 10:23:40 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:40.860 10:23:40 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.860 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 
10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:40.861 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.862 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.863 10:23:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:40.864 10:23:40 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:11:40.864 10:23:40 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:11:40.864 10:23:40 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:40.864 10:23:40 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:40.864 10:23:40 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:41.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:42.368 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:42.368 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:42.368 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:42.626 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:42.626 10:23:41 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:42.626 10:23:41 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:42.626 10:23:41 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.626 10:23:41 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:42.626 ************************************ 00:11:42.626 START TEST nvme_flexible_data_placement 00:11:42.626 ************************************ 00:11:42.626 10:23:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:42.885 Initializing NVMe Controllers 00:11:42.885 Attaching to 0000:00:13.0 00:11:42.885 Controller supports FDP Attached to 0000:00:13.0 00:11:42.885 Namespace ID: 1 Endurance Group ID: 1 00:11:42.885 Initialization complete. 
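The controller selection traced above reads each controller's CTRATT word and tests bit 19, the Flexible Data Placement capability bit: nvme0, nvme1 and nvme2 report 0x8000, while nvme3 (the 0000:00:13.0 device the fdp binary then targets) reports 0x88010. A minimal sketch of that check, with the CTRATT values copied from the trace:

# Sketch only: select the controller whose CTRATT has the FDP bit (bit 19) set.
declare -A ctratt=( [nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010 )
for ctrl in "${!ctratt[@]}"; do
  if (( ${ctratt[$ctrl]} & 1 << 19 )); then   # 0x88010 & 0x80000 != 0
    echo "$ctrl supports FDP"
  fi
done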
00:11:42.885 00:11:42.885 ================================== 00:11:42.885 == FDP tests for Namespace: #01 == 00:11:42.885 ================================== 00:11:42.885 00:11:42.885 Get Feature: FDP: 00:11:42.885 ================= 00:11:42.885 Enabled: Yes 00:11:42.885 FDP configuration Index: 0 00:11:42.885 00:11:42.885 FDP configurations log page 00:11:42.885 =========================== 00:11:42.885 Number of FDP configurations: 1 00:11:42.885 Version: 0 00:11:42.885 Size: 112 00:11:42.885 FDP Configuration Descriptor: 0 00:11:42.885 Descriptor Size: 96 00:11:42.885 Reclaim Group Identifier format: 2 00:11:42.885 FDP Volatile Write Cache: Not Present 00:11:42.885 FDP Configuration: Valid 00:11:42.885 Vendor Specific Size: 0 00:11:42.885 Number of Reclaim Groups: 2 00:11:42.885 Number of Reclaim Unit Handles: 8 00:11:42.885 Max Placement Identifiers: 128 00:11:42.885 Number of Namespaces Supported: 256 00:11:42.885 Reclaim unit Nominal Size: 6000000 bytes 00:11:42.885 Estimated Reclaim Unit Time Limit: Not Reported 00:11:42.885 RUH Desc #000: RUH Type: Initially Isolated 00:11:42.885 RUH Desc #001: RUH Type: Initially Isolated 00:11:42.885 RUH Desc #002: RUH Type: Initially Isolated 00:11:42.886 RUH Desc #003: RUH Type: Initially Isolated 00:11:42.886 RUH Desc #004: RUH Type: Initially Isolated 00:11:42.886 RUH Desc #005: RUH Type: Initially Isolated 00:11:42.886 RUH Desc #006: RUH Type: Initially Isolated 00:11:42.886 RUH Desc #007: RUH Type: Initially Isolated 00:11:42.886 00:11:42.886 FDP reclaim unit handle usage log page 00:11:42.886 ====================================== 00:11:42.886 Number of Reclaim Unit Handles: 8 00:11:42.886 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:42.886 RUH Usage Desc #001: RUH Attributes: Unused 00:11:42.886 RUH Usage Desc #002: RUH Attributes: Unused 00:11:42.886 RUH Usage Desc #003: RUH Attributes: Unused 00:11:42.886 RUH Usage Desc #004: RUH Attributes: Unused 00:11:42.886 RUH Usage Desc #005: RUH Attributes: Unused 00:11:42.886 RUH Usage Desc #006: RUH Attributes: Unused 00:11:42.886 RUH Usage Desc #007: RUH Attributes: Unused 00:11:42.886 00:11:42.886 FDP statistics log page 00:11:42.886 ======================= 00:11:42.886 Host bytes with metadata written: 1079517184 00:11:42.886 Media bytes with metadata written: 1079767040 00:11:42.886 Media bytes erased: 0 00:11:42.886 00:11:42.886 FDP Reclaim unit handle status 00:11:42.886 ============================== 00:11:42.886 Number of RUHS descriptors: 2 00:11:42.886 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001a7e 00:11:42.886 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:42.886 00:11:42.886 FDP write on placement id: 0 success 00:11:42.886 00:11:42.886 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:42.886 00:11:42.886 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:42.886 00:11:42.886 Get Feature: FDP Events for Placement handle: #0 00:11:42.886 ======================== 00:11:42.886 Number of FDP Events: 6 00:11:42.886 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:42.886 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:42.886 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:42.886 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:42.886 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:42.886 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:42.886 00:11:42.886 FDP events log
page 00:11:42.886 =================== 00:11:42.886 Number of FDP events: 1 00:11:42.886 FDP Event #0: 00:11:42.886 Event Type: RU Not Written to Capacity 00:11:42.886 Placement Identifier: Valid 00:11:42.886 NSID: Valid 00:11:42.886 Location: Valid 00:11:42.886 Placement Identifier: 0 00:11:42.886 Event Timestamp: 9 00:11:42.886 Namespace Identifier: 1 00:11:42.886 Reclaim Group Identifier: 0 00:11:42.886 Reclaim Unit Handle Identifier: 0 00:11:42.886 00:11:42.886 FDP test passed 00:11:42.886 00:11:42.886 real 0m0.289s 00:11:42.886 user 0m0.099s 00:11:42.886 sys 0m0.089s 00:11:42.886 10:23:42 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.886 10:23:42 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:42.886 ************************************ 00:11:42.886 END TEST nvme_flexible_data_placement 00:11:42.886 ************************************ 00:11:42.886 00:11:42.886 real 0m9.225s 00:11:42.886 user 0m1.662s 00:11:42.886 sys 0m2.692s 00:11:42.886 10:23:42 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.886 10:23:42 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:42.886 ************************************ 00:11:42.886 END TEST nvme_fdp 00:11:42.886 ************************************ 00:11:43.144 10:23:42 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:43.145 10:23:42 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:43.145 10:23:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:43.145 10:23:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.145 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:11:43.145 ************************************ 00:11:43.145 START TEST nvme_rpc 00:11:43.145 ************************************ 00:11:43.145 10:23:42 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:43.145 * Looking for test storage... 
00:11:43.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:43.145 10:23:42 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:43.145 10:23:42 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:43.145 10:23:42 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.404 10:23:42 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:43.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.404 --rc genhtml_branch_coverage=1 00:11:43.404 --rc genhtml_function_coverage=1 00:11:43.404 --rc genhtml_legend=1 00:11:43.404 --rc geninfo_all_blocks=1 00:11:43.404 --rc geninfo_unexecuted_blocks=1 00:11:43.404 00:11:43.404 ' 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:43.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.404 --rc genhtml_branch_coverage=1 00:11:43.404 --rc genhtml_function_coverage=1 00:11:43.404 --rc genhtml_legend=1 00:11:43.404 --rc geninfo_all_blocks=1 00:11:43.404 --rc geninfo_unexecuted_blocks=1 00:11:43.404 00:11:43.404 ' 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:43.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.404 --rc genhtml_branch_coverage=1 00:11:43.404 --rc genhtml_function_coverage=1 00:11:43.404 --rc genhtml_legend=1 00:11:43.404 --rc geninfo_all_blocks=1 00:11:43.404 --rc geninfo_unexecuted_blocks=1 00:11:43.404 00:11:43.404 ' 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:43.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.404 --rc genhtml_branch_coverage=1 00:11:43.404 --rc genhtml_function_coverage=1 00:11:43.404 --rc genhtml_legend=1 00:11:43.404 --rc geninfo_all_blocks=1 00:11:43.404 --rc geninfo_unexecuted_blocks=1 00:11:43.404 00:11:43.404 ' 00:11:43.404 10:23:42 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:43.404 10:23:42 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:11:43.404 10:23:42 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:43.404 10:23:42 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67064 00:11:43.404 10:23:42 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:43.404 10:23:42 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:43.404 10:23:42 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67064 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67064 ']' 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.404 10:23:42 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.663 [2024-12-07 10:23:42.782427] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:11:43.663 [2024-12-07 10:23:42.782563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67064 ] 00:11:43.663 [2024-12-07 10:23:42.962913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:43.921 [2024-12-07 10:23:43.096611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.921 [2024-12-07 10:23:43.096641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.856 10:23:44 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.856 10:23:44 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:44.856 10:23:44 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:45.116 Nvme0n1 00:11:45.116 10:23:44 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:45.116 10:23:44 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:45.376 request: 00:11:45.376 { 00:11:45.376 "bdev_name": "Nvme0n1", 00:11:45.376 "filename": "non_existing_file", 00:11:45.376 "method": "bdev_nvme_apply_firmware", 00:11:45.376 "req_id": 1 00:11:45.376 } 00:11:45.376 Got JSON-RPC error response 00:11:45.376 response: 00:11:45.376 { 00:11:45.376 "code": -32603, 00:11:45.376 "message": "open file failed." 00:11:45.376 } 00:11:45.376 10:23:44 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:45.376 10:23:44 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:45.376 10:23:44 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:45.376 10:23:44 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:45.376 10:23:44 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67064 00:11:45.376 10:23:44 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67064 ']' 00:11:45.376 10:23:44 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67064 00:11:45.376 10:23:44 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:45.376 10:23:44 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.376 10:23:44 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67064 00:11:45.636 10:23:44 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.636 killing process with pid 67064 00:11:45.636 10:23:44 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.636 10:23:44 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67064' 00:11:45.636 10:23:44 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67064 00:11:45.636 10:23:44 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67064 00:11:48.173 00:11:48.173 real 0m4.824s 00:11:48.173 user 0m8.577s 00:11:48.173 sys 0m0.940s 00:11:48.173 10:23:47 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.173 10:23:47 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.173 ************************************ 00:11:48.173 END TEST nvme_rpc 00:11:48.173 ************************************ 00:11:48.173 10:23:47 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:48.173 10:23:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:11:48.173 10:23:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.173 10:23:47 -- common/autotest_common.sh@10 -- # set +x 00:11:48.173 ************************************ 00:11:48.173 START TEST nvme_rpc_timeouts 00:11:48.173 ************************************ 00:11:48.173 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:48.173 * Looking for test storage... 00:11:48.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:48.173 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:48.173 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:11:48.173 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:48.173 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.173 10:23:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:48.173 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.173 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:48.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.173 --rc genhtml_branch_coverage=1 00:11:48.173 --rc genhtml_function_coverage=1 00:11:48.173 --rc genhtml_legend=1 00:11:48.173 --rc geninfo_all_blocks=1 00:11:48.173 --rc geninfo_unexecuted_blocks=1 00:11:48.173 00:11:48.173 ' 00:11:48.173 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:48.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.173 --rc genhtml_branch_coverage=1 00:11:48.173 --rc genhtml_function_coverage=1 00:11:48.174 --rc genhtml_legend=1 00:11:48.174 --rc geninfo_all_blocks=1 00:11:48.174 --rc geninfo_unexecuted_blocks=1 00:11:48.174 00:11:48.174 ' 00:11:48.174 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:48.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.174 --rc genhtml_branch_coverage=1 00:11:48.174 --rc genhtml_function_coverage=1 00:11:48.174 --rc genhtml_legend=1 00:11:48.174 --rc geninfo_all_blocks=1 00:11:48.174 --rc geninfo_unexecuted_blocks=1 00:11:48.174 00:11:48.174 ' 00:11:48.174 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:48.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.174 --rc genhtml_branch_coverage=1 00:11:48.174 --rc genhtml_function_coverage=1 00:11:48.174 --rc genhtml_legend=1 00:11:48.174 --rc geninfo_all_blocks=1 00:11:48.174 --rc geninfo_unexecuted_blocks=1 00:11:48.174 00:11:48.174 ' 00:11:48.174 10:23:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:48.174 10:23:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67152 00:11:48.174 10:23:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67152 00:11:48.174 10:23:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67185 00:11:48.174 10:23:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:48.174 10:23:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:48.174 10:23:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67185 00:11:48.174 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67185 ']' 00:11:48.174 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.174 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.174 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.174 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.174 10:23:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:48.434 [2024-12-07 10:23:47.555953] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:11:48.434 [2024-12-07 10:23:47.556099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67185 ] 00:11:48.434 [2024-12-07 10:23:47.736892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:48.693 [2024-12-07 10:23:47.870122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.693 [2024-12-07 10:23:47.870153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.632 10:23:48 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.632 10:23:48 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:11:49.632 Checking default timeout settings: 00:11:49.632 10:23:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:49.632 10:23:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:49.891 Making settings changes with rpc: 00:11:49.891 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:49.891 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:50.151 Check default vs. modified settings: 00:11:50.151 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:11:50.151 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67152 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67152 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:50.410 Setting action_on_timeout is changed as expected. 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67152 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:50.410 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67152 00:11:50.411 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:50.411 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:50.411 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:50.411 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:50.411 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:50.411 Setting timeout_us is changed as expected. 
00:11:50.411 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:50.411 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:50.411 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67152 00:11:50.411 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:50.670 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:50.670 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67152 00:11:50.670 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:50.670 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:50.670 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:50.670 Setting timeout_admin_us is changed as expected. 00:11:50.670 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:50.670 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:50.670 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:50.670 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67152 /tmp/settings_modified_67152 00:11:50.670 10:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67185 00:11:50.670 10:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67185 ']' 00:11:50.670 10:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67185 00:11:50.670 10:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:11:50.670 10:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.670 10:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67185 00:11:50.670 10:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.670 10:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.671 killing process with pid 67185 00:11:50.671 10:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67185' 00:11:50.671 10:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67185 00:11:50.671 10:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67185 00:11:53.206 RPC TIMEOUT SETTING TEST PASSED. 00:11:53.206 10:23:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
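The nvme_rpc_timeouts checks above all follow one pattern: dump the target configuration with save_config before and after bdev_nvme_set_options, then pull each field out of the saved JSON with a grep/awk/sed pipeline and compare the two values. A minimal sketch of that flow, reusing the paths and RPC arguments shown in the trace (the default-settings snapshot is assumed to have been saved already):

# Sketch only: verify one bdev_nvme option changed after the RPC call.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc_py" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
"$rpc_py" save_config > /tmp/settings_modified_67152

get_setting() {  # same grep | awk | sed pipeline the test traces above
  grep "$2" "$1" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
}
before=$(get_setting /tmp/settings_default_67152 timeout_us)   # 0
after=$(get_setting /tmp/settings_modified_67152 timeout_us)   # 12000000
[[ $before != "$after" ]] && echo "Setting timeout_us is changed as expected."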
00:11:53.206 00:11:53.206 real 0m5.111s 00:11:53.206 user 0m9.403s 00:11:53.206 sys 0m0.966s 00:11:53.206 10:23:52 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.206 10:23:52 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:53.206 ************************************ 00:11:53.206 END TEST nvme_rpc_timeouts 00:11:53.206 ************************************ 00:11:53.206 10:23:52 -- spdk/autotest.sh@239 -- # uname -s 00:11:53.206 10:23:52 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:53.207 10:23:52 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:53.207 10:23:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:53.207 10:23:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.207 10:23:52 -- common/autotest_common.sh@10 -- # set +x 00:11:53.207 ************************************ 00:11:53.207 START TEST sw_hotplug 00:11:53.207 ************************************ 00:11:53.207 10:23:52 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:53.207 * Looking for test storage... 00:11:53.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:53.207 10:23:52 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:53.207 10:23:52 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:11:53.207 10:23:52 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:53.466 10:23:52 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.466 10:23:52 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:11:53.466 10:23:52 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.466 10:23:52 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:53.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.466 --rc genhtml_branch_coverage=1 00:11:53.466 --rc genhtml_function_coverage=1 00:11:53.466 --rc genhtml_legend=1 00:11:53.466 --rc geninfo_all_blocks=1 00:11:53.466 --rc geninfo_unexecuted_blocks=1 00:11:53.466 00:11:53.466 ' 00:11:53.466 10:23:52 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:53.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.466 --rc genhtml_branch_coverage=1 00:11:53.466 --rc genhtml_function_coverage=1 00:11:53.466 --rc genhtml_legend=1 00:11:53.466 --rc geninfo_all_blocks=1 00:11:53.467 --rc geninfo_unexecuted_blocks=1 00:11:53.467 00:11:53.467 ' 00:11:53.467 10:23:52 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:53.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.467 --rc genhtml_branch_coverage=1 00:11:53.467 --rc genhtml_function_coverage=1 00:11:53.467 --rc genhtml_legend=1 00:11:53.467 --rc geninfo_all_blocks=1 00:11:53.467 --rc geninfo_unexecuted_blocks=1 00:11:53.467 00:11:53.467 ' 00:11:53.467 10:23:52 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:53.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.467 --rc genhtml_branch_coverage=1 00:11:53.467 --rc genhtml_function_coverage=1 00:11:53.467 --rc genhtml_legend=1 00:11:53.467 --rc geninfo_all_blocks=1 00:11:53.467 --rc geninfo_unexecuted_blocks=1 00:11:53.467 00:11:53.467 ' 00:11:53.467 10:23:52 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:54.036 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:54.296 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:54.296 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:54.296 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:54.296 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:54.296 10:23:53 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:54.296 10:23:53 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:54.296 10:23:53 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:11:54.296 10:23:53 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:54.296 10:23:53 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:54.297 10:23:53 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:54.297 10:23:53 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:54.297 10:23:53 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:54.297 10:23:53 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:54.297 10:23:53 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:54.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:55.437 Waiting for block devices as requested 00:11:55.437 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:55.437 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:55.437 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:55.697 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:00.975 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:00.975 10:23:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:12:00.975 10:23:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:01.542 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:12:01.542 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:01.542 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:12:01.800 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:12:02.368 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:02.368 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:02.368 10:24:01 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:12:02.368 10:24:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.368 10:24:01 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:12:02.368 10:24:01 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:02.368 10:24:01 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68075 00:12:02.368 10:24:01 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:02.368 10:24:01 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:12:02.368 10:24:01 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:02.368 10:24:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:12:02.368 10:24:01 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:02.368 10:24:01 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:02.368 10:24:01 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:02.368 10:24:01 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:02.368 10:24:01 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:12:02.368 10:24:01 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:02.368 10:24:01 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:02.368 10:24:01 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:12:02.368 10:24:01 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:02.368 10:24:01 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:02.628 Initializing NVMe Controllers 00:12:02.628 Attaching to 0000:00:10.0 00:12:02.628 Attaching to 0000:00:11.0 00:12:02.628 Attached to 0000:00:11.0 00:12:02.628 Attached to 0000:00:10.0 00:12:02.628 Initialization complete. Starting I/O... 
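The nvme_in_userspace enumeration traced a few entries above reduces to one lspci pipeline: list devices with numeric class codes, keep programming interface 02, and print the addresses whose class/subclass is 0108 (mass storage / NVM). A minimal sketch of that step, with the pipeline copied from scripts/common.sh as it appears in the trace; the pci_can_use allow/deny filtering is omitted here.

# Sketch only: discover NVMe controller BDFs by PCI class code 0108, prog-if 02.
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# On this VM that prints 0000:00:10.0, 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0.
nvmes=($(lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))
nvmes=("${nvmes[@]::2}")   # the hotplug test then keeps only the first two (nvme_count=2)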
00:12:02.628 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:12:02.628 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:02.628 00:12:04.008 QEMU NVMe Ctrl (12341 ): 1508 I/Os completed (+1508) 00:12:04.008 QEMU NVMe Ctrl (12340 ): 1508 I/Os completed (+1508) 00:12:04.008 00:12:04.945 QEMU NVMe Ctrl (12341 ): 3616 I/Os completed (+2108) 00:12:04.945 QEMU NVMe Ctrl (12340 ): 3617 I/Os completed (+2109) 00:12:04.945 00:12:05.882 QEMU NVMe Ctrl (12341 ): 5760 I/Os completed (+2144) 00:12:05.882 QEMU NVMe Ctrl (12340 ): 5767 I/Os completed (+2150) 00:12:05.882 00:12:06.820 QEMU NVMe Ctrl (12341 ): 7936 I/Os completed (+2176) 00:12:06.820 QEMU NVMe Ctrl (12340 ): 7947 I/Os completed (+2180) 00:12:06.820 00:12:07.760 QEMU NVMe Ctrl (12341 ): 10124 I/Os completed (+2188) 00:12:07.760 QEMU NVMe Ctrl (12340 ): 10136 I/Os completed (+2189) 00:12:07.760 00:12:08.699 10:24:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:08.699 10:24:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:08.699 10:24:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:08.699 [2024-12-07 10:24:07.710957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:08.699 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:08.699 [2024-12-07 10:24:07.712970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 [2024-12-07 10:24:07.713058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 [2024-12-07 10:24:07.713082] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 [2024-12-07 10:24:07.713108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:08.699 [2024-12-07 10:24:07.715944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 [2024-12-07 10:24:07.716016] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 [2024-12-07 10:24:07.716036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 [2024-12-07 10:24:07.716060] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 10:24:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:08.699 10:24:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:08.699 [2024-12-07 10:24:07.752629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:08.699 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:08.699 [2024-12-07 10:24:07.754304] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 [2024-12-07 10:24:07.754352] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 [2024-12-07 10:24:07.754383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 [2024-12-07 10:24:07.754404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:08.699 [2024-12-07 10:24:07.757107] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 [2024-12-07 10:24:07.757151] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 [2024-12-07 10:24:07.757173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 [2024-12-07 10:24:07.757194] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.699 10:24:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:08.699 10:24:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:08.699 10:24:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:08.699 10:24:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:08.699 10:24:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:08.699 00:12:08.699 10:24:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:08.699 10:24:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:08.699 10:24:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:08.699 10:24:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:08.699 10:24:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:08.699 Attaching to 0000:00:10.0 00:12:08.699 Attached to 0000:00:10.0 00:12:08.960 10:24:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:08.960 10:24:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:08.960 10:24:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:08.960 Attaching to 0000:00:11.0 00:12:08.960 Attached to 0000:00:11.0 00:12:09.897 QEMU NVMe Ctrl (12340 ): 2016 I/Os completed (+2016) 00:12:09.897 QEMU NVMe Ctrl (12341 ): 1748 I/Os completed (+1748) 00:12:09.897 00:12:10.834 QEMU NVMe Ctrl (12340 ): 4244 I/Os completed (+2228) 00:12:10.834 QEMU NVMe Ctrl (12341 ): 3976 I/Os completed (+2228) 00:12:10.834 00:12:11.853 QEMU NVMe Ctrl (12340 ): 6368 I/Os completed (+2124) 00:12:11.853 QEMU NVMe Ctrl (12341 ): 6107 I/Os completed (+2131) 00:12:11.853 00:12:12.790 QEMU NVMe Ctrl (12340 ): 8556 I/Os completed (+2188) 00:12:12.790 QEMU NVMe Ctrl (12341 ): 8295 I/Os completed (+2188) 00:12:12.790 00:12:13.729 QEMU NVMe Ctrl (12340 ): 10768 I/Os completed (+2212) 00:12:13.729 QEMU NVMe Ctrl (12341 ): 10507 I/Os completed (+2212) 00:12:13.729 00:12:14.667 QEMU NVMe Ctrl (12340 ): 12984 I/Os completed (+2216) 00:12:14.667 QEMU NVMe Ctrl (12341 ): 12723 I/Os completed (+2216) 00:12:14.667 00:12:15.605 QEMU NVMe Ctrl (12340 ): 15188 I/Os completed (+2204) 00:12:15.605 QEMU NVMe Ctrl (12341 ): 14927 I/Os completed (+2204) 00:12:15.605 00:12:16.982 QEMU NVMe Ctrl (12340 ): 17396 I/Os completed (+2208) 00:12:16.982 QEMU NVMe Ctrl (12341 ): 17135 I/Os completed (+2208) 00:12:16.982 
00:12:17.920 QEMU NVMe Ctrl (12340 ): 19600 I/Os completed (+2204) 00:12:17.920 QEMU NVMe Ctrl (12341 ): 19339 I/Os completed (+2204) 00:12:17.920 00:12:18.858 QEMU NVMe Ctrl (12340 ): 21812 I/Os completed (+2212) 00:12:18.858 QEMU NVMe Ctrl (12341 ): 21551 I/Os completed (+2212) 00:12:18.858 00:12:19.796 QEMU NVMe Ctrl (12340 ): 24020 I/Os completed (+2208) 00:12:19.796 QEMU NVMe Ctrl (12341 ): 23759 I/Os completed (+2208) 00:12:19.796 00:12:20.735 QEMU NVMe Ctrl (12340 ): 26224 I/Os completed (+2204) 00:12:20.735 QEMU NVMe Ctrl (12341 ): 25963 I/Os completed (+2204) 00:12:20.735 00:12:20.995 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:20.995 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:20.995 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:20.995 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:20.995 [2024-12-07 10:24:20.144148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:20.995 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:20.995 [2024-12-07 10:24:20.145833] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 [2024-12-07 10:24:20.145895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 [2024-12-07 10:24:20.145917] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 [2024-12-07 10:24:20.145941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:20.995 [2024-12-07 10:24:20.148849] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 [2024-12-07 10:24:20.148904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 [2024-12-07 10:24:20.148922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 [2024-12-07 10:24:20.148942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:20.995 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:20.995 [2024-12-07 10:24:20.181915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:20.995 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:20.995 [2024-12-07 10:24:20.183470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 [2024-12-07 10:24:20.183516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 [2024-12-07 10:24:20.183542] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 [2024-12-07 10:24:20.183561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:20.995 [2024-12-07 10:24:20.186062] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 [2024-12-07 10:24:20.186104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 [2024-12-07 10:24:20.186126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 [2024-12-07 10:24:20.186145] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.995 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:20.995 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:20.995 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:20.995 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:20.995 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:21.253 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:21.253 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:21.253 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:21.253 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:21.253 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:21.253 Attaching to 0000:00:10.0 00:12:21.253 Attached to 0000:00:10.0 00:12:21.253 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:21.253 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:21.253 10:24:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:21.253 Attaching to 0000:00:11.0 00:12:21.253 Attached to 0000:00:11.0 00:12:21.821 QEMU NVMe Ctrl (12340 ): 1128 I/Os completed (+1128) 00:12:21.821 QEMU NVMe Ctrl (12341 ): 896 I/Os completed (+896) 00:12:21.821 00:12:22.757 QEMU NVMe Ctrl (12340 ): 3320 I/Os completed (+2192) 00:12:22.757 QEMU NVMe Ctrl (12341 ): 3088 I/Os completed (+2192) 00:12:22.757 00:12:23.693 QEMU NVMe Ctrl (12340 ): 5520 I/Os completed (+2200) 00:12:23.693 QEMU NVMe Ctrl (12341 ): 5288 I/Os completed (+2200) 00:12:23.693 00:12:24.631 QEMU NVMe Ctrl (12340 ): 7716 I/Os completed (+2196) 00:12:24.631 QEMU NVMe Ctrl (12341 ): 7484 I/Os completed (+2196) 00:12:24.631 00:12:25.564 QEMU NVMe Ctrl (12340 ): 9916 I/Os completed (+2200) 00:12:25.564 QEMU NVMe Ctrl (12341 ): 9684 I/Os completed (+2200) 00:12:25.564 00:12:26.955 QEMU NVMe Ctrl (12340 ): 12120 I/Os completed (+2204) 00:12:26.955 QEMU NVMe Ctrl (12341 ): 11888 I/Os completed (+2204) 00:12:26.955 00:12:27.892 QEMU NVMe Ctrl (12340 ): 14324 I/Os completed (+2204) 00:12:27.892 QEMU NVMe Ctrl (12341 ): 14092 I/Os completed (+2204) 00:12:27.892 00:12:28.830 QEMU NVMe Ctrl (12340 ): 16524 I/Os completed (+2200) 00:12:28.830 QEMU NVMe Ctrl (12341 ): 16292 I/Os completed (+2200) 00:12:28.830 00:12:29.768 QEMU 
NVMe Ctrl (12340 ): 18736 I/Os completed (+2212) 00:12:29.768 QEMU NVMe Ctrl (12341 ): 18504 I/Os completed (+2212) 00:12:29.768 00:12:30.707 QEMU NVMe Ctrl (12340 ): 20940 I/Os completed (+2204) 00:12:30.707 QEMU NVMe Ctrl (12341 ): 20708 I/Os completed (+2204) 00:12:30.707 00:12:31.645 QEMU NVMe Ctrl (12340 ): 23133 I/Os completed (+2193) 00:12:31.645 QEMU NVMe Ctrl (12341 ): 22900 I/Os completed (+2192) 00:12:31.645 00:12:32.584 QEMU NVMe Ctrl (12340 ): 25337 I/Os completed (+2204) 00:12:32.584 QEMU NVMe Ctrl (12341 ): 25104 I/Os completed (+2204) 00:12:32.584 00:12:33.153 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:33.153 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:33.153 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:33.153 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:33.153 [2024-12-07 10:24:32.502544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:33.153 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:33.153 [2024-12-07 10:24:32.504257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.153 [2024-12-07 10:24:32.504314] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.153 [2024-12-07 10:24:32.504335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.153 [2024-12-07 10:24:32.504357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.413 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:33.414 [2024-12-07 10:24:32.507169] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.414 [2024-12-07 10:24:32.507221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.414 [2024-12-07 10:24:32.507240] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.414 [2024-12-07 10:24:32.507259] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.414 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:33.414 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:33.414 [2024-12-07 10:24:32.545922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:33.414 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:33.414 [2024-12-07 10:24:32.547489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.414 [2024-12-07 10:24:32.547537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.414 [2024-12-07 10:24:32.547561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.414 [2024-12-07 10:24:32.547580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.414 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:33.414 [2024-12-07 10:24:32.550123] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.414 [2024-12-07 10:24:32.550165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.414 [2024-12-07 10:24:32.550189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.414 [2024-12-07 10:24:32.550206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.414 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:33.414 EAL: Scan for (pci) bus failed. 00:12:33.414 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:33.414 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:33.414 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:33.414 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:33.414 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:33.414 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:33.414 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:33.414 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:33.414 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:33.414 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:33.414 Attaching to 0000:00:10.0 00:12:33.414 Attached to 0000:00:10.0 00:12:33.673 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:33.673 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:33.673 10:24:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:33.673 Attaching to 0000:00:11.0 00:12:33.673 Attached to 0000:00:11.0 00:12:33.673 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:33.673 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:33.673 [2024-12-07 10:24:32.861900] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:45.892 10:24:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:45.892 10:24:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:45.892 10:24:44 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.15 00:12:45.892 10:24:44 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.15 00:12:45.892 10:24:44 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:45.892 10:24:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.15 00:12:45.892 10:24:44 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.15 2 00:12:45.892 remove_attach_helper took 43.15s to complete (handling 2 nvme drive(s)) 10:24:44 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:52.465 10:24:50 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68075 00:12:52.465 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68075) - No such process 00:12:52.465 10:24:50 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68075 00:12:52.465 10:24:50 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:52.465 10:24:50 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:52.465 10:24:50 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:52.465 10:24:50 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68616 00:12:52.465 10:24:50 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:52.465 10:24:50 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:52.465 10:24:50 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68616 00:12:52.465 10:24:50 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68616 ']' 00:12:52.465 10:24:50 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.465 10:24:50 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.465 10:24:50 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.465 10:24:50 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.465 10:24:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:52.465 [2024-12-07 10:24:50.974383] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:12:52.465 [2024-12-07 10:24:50.974650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68616 ] 00:12:52.465 [2024-12-07 10:24:51.153963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.466 [2024-12-07 10:24:51.258863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.034 10:24:52 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.034 10:24:52 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:12:53.034 10:24:52 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:53.034 10:24:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.034 10:24:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.034 10:24:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.034 10:24:52 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:53.034 10:24:52 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:53.034 10:24:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:53.034 10:24:52 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:53.034 10:24:52 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:53.034 10:24:52 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:53.034 10:24:52 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:53.034 10:24:52 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:53.034 10:24:52 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:53.034 10:24:52 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:53.034 10:24:52 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:53.034 10:24:52 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:53.034 10:24:52 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:59.687 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:59.687 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:59.687 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:59.687 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:59.687 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:59.687 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:59.687 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:59.687 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:59.687 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:59.687 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:59.687 10:24:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.687 10:24:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:59.687 [2024-12-07 10:24:58.187603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:59.687 [2024-12-07 10:24:58.189896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.687 [2024-12-07 10:24:58.189938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.687 [2024-12-07 10:24:58.189956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.687 [2024-12-07 10:24:58.189998] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.688 [2024-12-07 10:24:58.190011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.688 [2024-12-07 10:24:58.190029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.688 [2024-12-07 10:24:58.190042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.688 [2024-12-07 10:24:58.190057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.688 [2024-12-07 10:24:58.190069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.688 [2024-12-07 10:24:58.190088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.688 [2024-12-07 10:24:58.190099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.688 [2024-12-07 10:24:58.190113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:59.688 10:24:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:59.688 [2024-12-07 10:24:58.586937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:59.688 [2024-12-07 10:24:58.589252] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.688 [2024-12-07 10:24:58.589431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.688 [2024-12-07 10:24:58.589460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.688 [2024-12-07 10:24:58.589485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.688 [2024-12-07 10:24:58.589500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.688 [2024-12-07 10:24:58.589512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.688 [2024-12-07 10:24:58.589528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.688 [2024-12-07 10:24:58.589540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.688 [2024-12-07 10:24:58.589554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.688 [2024-12-07 10:24:58.589575] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.688 [2024-12-07 10:24:58.589589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.688 [2024-12-07 10:24:58.589602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:59.688 10:24:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.688 10:24:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:59.688 10:24:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:59.688 10:24:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
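[Editor's note] In this bdev-backed pass (use_bdev=true) the helper does not watch sysfs directly; as the trace at sw_hotplug.sh lines 12-13 and 50-51 shows, it asks the SPDK target which PCI addresses still back NVMe bdevs (bdev_get_bdevs piped through jq and sort -u) and sleeps 0.5 s until the removed BDF drops out of that list. A hedged standalone equivalent is sketched below; it calls scripts/rpc.py directly instead of the autotest rpc_cmd wrapper, which is an assumed but equivalent invocation, and wait_for_bdf_gone is an illustrative name, not a helper from the script.

  # Poll the SPDK target until no NVMe bdev is backed by the given PCI address.
  wait_for_bdf_gone() {
    local bdf=$1
    while scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u \
        | grep -qx "$bdf"; do
      printf 'Still waiting for %s to be gone\n' "$bdf"
      sleep 0.5
    done
  }
  # usage: wait_for_bdf_gone 0000:00:11.0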
00:12:59.947 10:24:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:59.947 10:24:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:59.947 10:24:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:12.157 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:12.157 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:12.157 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:12.157 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:12.157 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:12.157 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:12.157 10:25:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.157 10:25:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:12.157 10:25:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.157 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:12.157 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:12.158 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:12.158 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:12.158 [2024-12-07 10:25:11.166699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:12.158 [2024-12-07 10:25:11.170365] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.158 [2024-12-07 10:25:11.170511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.158 [2024-12-07 10:25:11.170657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.158 [2024-12-07 10:25:11.170728] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.158 [2024-12-07 10:25:11.170762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.158 [2024-12-07 10:25:11.170864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.158 [2024-12-07 10:25:11.170923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.158 [2024-12-07 10:25:11.170958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.158 [2024-12-07 10:25:11.171075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.158 [2024-12-07 10:25:11.171138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.158 [2024-12-07 10:25:11.171171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.158 [2024-12-07 10:25:11.171223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.158 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:12.158 10:25:11 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:13:12.158 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:12.158 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:12.158 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:12.158 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:12.158 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:12.158 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:12.158 10:25:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.158 10:25:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:12.158 10:25:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.158 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:12.158 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:12.724 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:12.724 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:12.724 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:12.724 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:12.724 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:12.724 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:12.724 10:25:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.724 10:25:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:12.724 10:25:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.724 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:12.724 10:25:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:12.724 [2024-12-07 10:25:11.965405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:12.724 [2024-12-07 10:25:11.967820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.724 [2024-12-07 10:25:11.967860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.724 [2024-12-07 10:25:11.967898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.724 [2024-12-07 10:25:11.967922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.724 [2024-12-07 10:25:11.967937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.724 [2024-12-07 10:25:11.967949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.724 [2024-12-07 10:25:11.967965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.724 [2024-12-07 10:25:11.967975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.724 [2024-12-07 10:25:11.968004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.724 [2024-12-07 10:25:11.968018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.724 [2024-12-07 10:25:11.968031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.724 [2024-12-07 10:25:11.968043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.982 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:12.982 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:12.982 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:12.982 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:12.982 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:12.982 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:12.982 10:25:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.982 10:25:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:13.240 10:25:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.240 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:13.240 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:13.240 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:13.240 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:13.240 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:13.240 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:13.240 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:13.240 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:13.240 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:13.240 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:13.499 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:13.499 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:13.499 10:25:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:25.713 10:25:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.713 10:25:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:25.713 10:25:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:25.713 10:25:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.713 10:25:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:25.713 [2024-12-07 10:25:24.844735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:25.713 [2024-12-07 10:25:24.847540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.713 [2024-12-07 10:25:24.847601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.713 [2024-12-07 10:25:24.847622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.713 [2024-12-07 10:25:24.847662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.713 [2024-12-07 10:25:24.847676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.713 [2024-12-07 10:25:24.847702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.713 [2024-12-07 10:25:24.847719] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.713 [2024-12-07 10:25:24.847739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.713 [2024-12-07 10:25:24.847753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.713 [2024-12-07 10:25:24.847775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.713 [2024-12-07 10:25:24.847788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.713 [2024-12-07 10:25:24.847807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.713 10:25:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:25.713 10:25:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:26.280 [2024-12-07 10:25:25.343924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:26.280 [2024-12-07 10:25:25.346370] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.280 [2024-12-07 10:25:25.346420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.280 [2024-12-07 10:25:25.346452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.280 [2024-12-07 10:25:25.346476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.280 [2024-12-07 10:25:25.346492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.280 [2024-12-07 10:25:25.346506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.280 [2024-12-07 10:25:25.346524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.280 [2024-12-07 10:25:25.346538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.280 [2024-12-07 10:25:25.346556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.280 [2024-12-07 10:25:25.346571] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.280 [2024-12-07 10:25:25.346587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.280 [2024-12-07 10:25:25.346600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:26.280 10:25:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.280 10:25:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:26.280 10:25:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:26.280 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:26.539 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:26.539 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:26.539 10:25:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.66 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.66 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.66 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.66 2 00:13:38.755 remove_attach_helper took 45.66s to complete (handling 2 nvme drive(s)) 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:38.755 10:25:37 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:38.755 10:25:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:38.755 10:25:37 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:45.320 10:25:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:45.320 10:25:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:45.320 10:25:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:45.320 10:25:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:45.320 10:25:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:45.320 10:25:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:45.320 10:25:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:45.320 10:25:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:45.320 10:25:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:45.320 10:25:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:45.320 10:25:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:45.320 10:25:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.320 10:25:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:45.320 [2024-12-07 10:25:43.884478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:45.320 [2024-12-07 10:25:43.886627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.320 [2024-12-07 10:25:43.886684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.320 [2024-12-07 10:25:43.886705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.320 [2024-12-07 10:25:43.886745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.320 [2024-12-07 10:25:43.886760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.320 [2024-12-07 10:25:43.886781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.320 [2024-12-07 10:25:43.886798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.320 [2024-12-07 10:25:43.886818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.320 [2024-12-07 10:25:43.886832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.320 [2024-12-07 10:25:43.886853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.320 [2024-12-07 10:25:43.886867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.320 [2024-12-07 10:25:43.886891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.320 10:25:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.320 10:25:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:45.320 10:25:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:45.320 [2024-12-07 10:25:44.383662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:45.320 [2024-12-07 10:25:44.385456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.320 [2024-12-07 10:25:44.385500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.320 [2024-12-07 10:25:44.385527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.320 [2024-12-07 10:25:44.385550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.320 [2024-12-07 10:25:44.385571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.320 [2024-12-07 10:25:44.385586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.320 [2024-12-07 10:25:44.385607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.320 [2024-12-07 10:25:44.385621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.320 [2024-12-07 10:25:44.385644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.320 [2024-12-07 10:25:44.385671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.320 [2024-12-07 10:25:44.385691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.320 [2024-12-07 10:25:44.385704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.320 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:45.320 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:45.320 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:45.320 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:45.320 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:45.320 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:45.320 10:25:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.320 10:25:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:45.320 10:25:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.320 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:45.320 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:45.320 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:45.320 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:45.320 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:45.320 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:45.579 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:45.579 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:45.579 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:45.579 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:45.579 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:45.579 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:45.579 10:25:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:57.793 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:57.793 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:57.793 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:57.793 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:57.793 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:57.793 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:57.793 10:25:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.793 10:25:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:57.793 10:25:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.793 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:57.793 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:57.793 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:57.793 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:57.793 [2024-12-07 10:25:56.863598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:57.793 [2024-12-07 10:25:56.865909] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:57.794 [2024-12-07 10:25:56.866111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.794 [2024-12-07 10:25:56.866242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.794 [2024-12-07 10:25:56.866326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:57.794 [2024-12-07 10:25:56.866365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.794 [2024-12-07 10:25:56.866488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.794 [2024-12-07 10:25:56.866551] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:57.794 [2024-12-07 10:25:56.866594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.794 [2024-12-07 10:25:56.866697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.794 [2024-12-07 10:25:56.866768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:57.794 [2024-12-07 10:25:56.866805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.794 [2024-12-07 10:25:56.866866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.794 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:57.794 10:25:56 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:57.794 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:57.794 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:57.794 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:57.794 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:57.794 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:57.794 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:57.794 10:25:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.794 10:25:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:57.794 10:25:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.794 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:57.794 10:25:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:58.054 [2024-12-07 10:25:57.262933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:58.054 [2024-12-07 10:25:57.264671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.054 [2024-12-07 10:25:57.264713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.054 [2024-12-07 10:25:57.264734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.054 [2024-12-07 10:25:57.264755] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.054 [2024-12-07 10:25:57.264777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.054 [2024-12-07 10:25:57.264790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.054 [2024-12-07 10:25:57.264809] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.054 [2024-12-07 10:25:57.264822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.054 [2024-12-07 10:25:57.264839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.054 [2024-12-07 10:25:57.264853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.054 [2024-12-07 10:25:57.264870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.054 [2024-12-07 10:25:57.264884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.314 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:58.314 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:58.314 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:58.314 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:58.314 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:58.314 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:13:58.314 10:25:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.314 10:25:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:58.314 10:25:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.314 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:58.314 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:58.314 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:58.314 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:58.314 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:58.573 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:58.573 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:58.573 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:58.573 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:58.573 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:58.573 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:58.573 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:58.573 10:25:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:10.792 10:26:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.792 10:26:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:10.792 10:26:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:10.792 [2024-12-07 10:26:09.942644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
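The checks above all repeat the same pattern: the bdev_bdfs helper (lines 12-13 of test/nvme/sw_hotplug.sh in this trace) asks the running SPDK target for its bdevs over JSON-RPC and reduces the reply to the sorted, unique PCI addresses backing them. A minimal sketch reconstructed from the xtrace only; rpc_cmd is the JSON-RPC helper provided by test/common/autotest_common.sh, and the process substitution (jq reading /dev/fd/63) is inferred from the trace rather than copied from the repository source:

# bdev_bdfs: list the PCI addresses (BDFs) behind the target's NVMe bdevs.
bdev_bdfs() {
    jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
}

# As used at lines 70-71 of the script: after a hotplug cycle both
# controllers must be visible again.
bdfs=($(bdev_bdfs))
[[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]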
00:14:10.792 [2024-12-07 10:26:09.944910] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:10.792 [2024-12-07 10:26:09.945101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:10.792 [2024-12-07 10:26:09.945231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.792 [2024-12-07 10:26:09.945401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:10.792 [2024-12-07 10:26:09.945449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:10.792 [2024-12-07 10:26:09.945575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.792 [2024-12-07 10:26:09.945646] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:10.792 [2024-12-07 10:26:09.945692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:10.792 [2024-12-07 10:26:09.945863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.792 [2024-12-07 10:26:09.946046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:10.792 [2024-12-07 10:26:09.946180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:10.792 [2024-12-07 10:26:09.946247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:10.792 10:26:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:10.792 10:26:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.792 10:26:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:10.792 10:26:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.792 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:10.792 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:11.362 [2024-12-07 10:26:10.441851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:11.362 [2024-12-07 10:26:10.443831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.362 [2024-12-07 10:26:10.443873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.362 [2024-12-07 10:26:10.443893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.362 [2024-12-07 10:26:10.443915] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.362 [2024-12-07 10:26:10.443932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.362 [2024-12-07 10:26:10.443946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.362 [2024-12-07 10:26:10.443966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.362 [2024-12-07 10:26:10.443995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.362 [2024-12-07 10:26:10.444013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.362 [2024-12-07 10:26:10.444029] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.362 [2024-12-07 10:26:10.444049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.362 [2024-12-07 10:26:10.444063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.362 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:11.362 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:11.362 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:11.362 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:11.362 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:11.362 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:11.362 10:26:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.362 10:26:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:11.362 10:26:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.362 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:11.362 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:11.362 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:11.362 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:11.362 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:11.621 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:11.621 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:11.621 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:11.621 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:11.621 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
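Each hotplug cycle in this trace follows the same shape: after the controllers are detached, lines 50-51 of sw_hotplug.sh poll bdev_bdfs every half second and print "Still waiting for %s to be gone" for any address still present; once the list comes back empty the script echoes each controller's driver and BDF back (the echo uio_pci_generic and BDF entries at lines 58-62) and sleeps 12 s before re-checking. A sketch of the polling half, reconstructed from the xtrace; the loop keyword and the relative order of sleep and printf are assumptions, only the expanded commands are visible above:

# Poll until the detached controllers' bdevs disappear from the target.
while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
done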
00:14:11.622 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:11.622 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:11.622 10:26:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:23.968 10:26:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:23.968 10:26:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:23.968 10:26:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:23.968 10:26:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:23.968 10:26:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:23.968 10:26:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:23.968 10:26:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.968 10:26:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:23.968 10:26:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.968 10:26:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:23.968 10:26:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:23.968 10:26:22 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.14 00:14:23.968 10:26:22 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.14 00:14:23.968 10:26:22 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:23.968 10:26:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.14 00:14:23.968 10:26:22 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.14 2 00:14:23.968 remove_attach_helper took 45.14s to complete (handling 2 nvme drive(s)) 10:26:22 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:23.968 10:26:22 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68616 00:14:23.968 10:26:22 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68616 ']' 00:14:23.968 10:26:22 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68616 00:14:23.968 10:26:22 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:14:23.968 10:26:22 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.968 10:26:22 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68616 00:14:23.968 10:26:22 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.968 10:26:23 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.968 10:26:23 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68616' 00:14:23.968 killing process with pid 68616 00:14:23.968 10:26:23 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68616 00:14:23.968 10:26:23 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68616 00:14:26.504 10:26:25 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:26.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:27.331 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:27.331 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:27.591 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:27.591 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:27.591 00:14:27.591 real 2m34.498s 00:14:27.591 user 1m51.827s 00:14:27.591 sys 0m22.790s 00:14:27.591 10:26:26 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.591 10:26:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:27.591 ************************************ 00:14:27.591 END TEST sw_hotplug 00:14:27.591 ************************************ 00:14:27.850 10:26:26 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:14:27.850 10:26:26 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:27.850 10:26:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:27.850 10:26:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.850 10:26:26 -- common/autotest_common.sh@10 -- # set +x 00:14:27.850 ************************************ 00:14:27.850 START TEST nvme_xnvme 00:14:27.850 ************************************ 00:14:27.850 10:26:26 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:27.850 * Looking for test storage... 00:14:27.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:27.850 10:26:27 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:27.850 10:26:27 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:14:27.850 10:26:27 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:27.850 10:26:27 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:27.850 10:26:27 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:27.850 10:26:27 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:27.850 10:26:27 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:27.850 10:26:27 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:27.850 10:26:27 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:27.850 10:26:27 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:27.850 10:26:27 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:27.850 10:26:27 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:27.850 10:26:27 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:27.850 10:26:27 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.113 10:26:27 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:28.113 10:26:27 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.113 10:26:27 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:28.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.113 --rc genhtml_branch_coverage=1 00:14:28.113 --rc genhtml_function_coverage=1 00:14:28.113 --rc genhtml_legend=1 00:14:28.113 --rc geninfo_all_blocks=1 00:14:28.113 --rc geninfo_unexecuted_blocks=1 00:14:28.113 00:14:28.113 ' 00:14:28.113 10:26:27 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:28.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.113 --rc genhtml_branch_coverage=1 00:14:28.113 --rc genhtml_function_coverage=1 00:14:28.113 --rc genhtml_legend=1 00:14:28.113 --rc geninfo_all_blocks=1 00:14:28.113 --rc geninfo_unexecuted_blocks=1 00:14:28.113 00:14:28.113 ' 00:14:28.113 10:26:27 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:28.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.113 --rc genhtml_branch_coverage=1 00:14:28.113 --rc genhtml_function_coverage=1 00:14:28.113 --rc genhtml_legend=1 00:14:28.113 --rc geninfo_all_blocks=1 00:14:28.113 --rc geninfo_unexecuted_blocks=1 00:14:28.113 00:14:28.113 ' 00:14:28.113 10:26:27 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:28.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.113 --rc genhtml_branch_coverage=1 00:14:28.113 --rc genhtml_function_coverage=1 00:14:28.113 --rc genhtml_legend=1 00:14:28.113 --rc geninfo_all_blocks=1 00:14:28.113 --rc geninfo_unexecuted_blocks=1 00:14:28.113 00:14:28.113 ' 00:14:28.113 10:26:27 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:14:28.113 10:26:27 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:14:28.113 10:26:27 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:28.113 10:26:27 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:14:28.113 10:26:27 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:28.113 10:26:27 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:28.113 10:26:27 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:28.113 10:26:27 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:14:28.113 10:26:27 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:14:28.113 10:26:27 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:28.113 10:26:27 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:28.114 10:26:27 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:28.114 10:26:27 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:28.114 10:26:27 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:14:28.114 10:26:27 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:28.114 #define SPDK_CONFIG_H 00:14:28.114 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:28.114 #define SPDK_CONFIG_APPS 1 00:14:28.114 #define SPDK_CONFIG_ARCH native 00:14:28.114 #define SPDK_CONFIG_ASAN 1 00:14:28.114 #undef SPDK_CONFIG_AVAHI 00:14:28.114 #undef SPDK_CONFIG_CET 00:14:28.114 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:28.114 #define SPDK_CONFIG_COVERAGE 1 00:14:28.114 #define SPDK_CONFIG_CROSS_PREFIX 00:14:28.114 #undef SPDK_CONFIG_CRYPTO 00:14:28.114 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:28.114 #undef SPDK_CONFIG_CUSTOMOCF 00:14:28.114 #undef SPDK_CONFIG_DAOS 00:14:28.114 #define SPDK_CONFIG_DAOS_DIR 00:14:28.114 #define SPDK_CONFIG_DEBUG 1 00:14:28.114 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:28.114 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:28.114 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:28.114 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:28.114 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:28.114 #undef SPDK_CONFIG_DPDK_UADK 00:14:28.114 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:28.114 #define SPDK_CONFIG_EXAMPLES 1 00:14:28.114 #undef SPDK_CONFIG_FC 00:14:28.114 #define SPDK_CONFIG_FC_PATH 00:14:28.114 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:28.114 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:28.114 #define SPDK_CONFIG_FSDEV 1 00:14:28.114 #undef SPDK_CONFIG_FUSE 00:14:28.114 #undef SPDK_CONFIG_FUZZER 00:14:28.114 #define SPDK_CONFIG_FUZZER_LIB 00:14:28.114 #undef SPDK_CONFIG_GOLANG 00:14:28.114 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:28.114 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:28.114 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:28.114 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:28.114 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:28.114 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:28.114 #undef SPDK_CONFIG_HAVE_LZ4 00:14:28.114 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:28.114 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:28.114 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:28.115 #define SPDK_CONFIG_IDXD 1 00:14:28.115 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:28.115 #undef SPDK_CONFIG_IPSEC_MB 00:14:28.115 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:28.115 #define SPDK_CONFIG_ISAL 1 00:14:28.115 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:28.115 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:28.115 #define SPDK_CONFIG_LIBDIR 00:14:28.115 #undef SPDK_CONFIG_LTO 00:14:28.115 #define SPDK_CONFIG_MAX_LCORES 128 00:14:28.115 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:28.115 #define SPDK_CONFIG_NVME_CUSE 1 00:14:28.115 #undef SPDK_CONFIG_OCF 00:14:28.115 #define SPDK_CONFIG_OCF_PATH 00:14:28.115 #define SPDK_CONFIG_OPENSSL_PATH 00:14:28.115 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:28.115 #define SPDK_CONFIG_PGO_DIR 00:14:28.115 #undef SPDK_CONFIG_PGO_USE 00:14:28.115 #define SPDK_CONFIG_PREFIX /usr/local 00:14:28.115 #undef SPDK_CONFIG_RAID5F 00:14:28.115 #undef SPDK_CONFIG_RBD 00:14:28.115 #define SPDK_CONFIG_RDMA 1 00:14:28.115 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:28.115 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:28.115 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:28.115 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:28.115 #define SPDK_CONFIG_SHARED 1 00:14:28.115 #undef SPDK_CONFIG_SMA 00:14:28.115 #define SPDK_CONFIG_TESTS 1 00:14:28.115 #undef SPDK_CONFIG_TSAN 00:14:28.115 #define SPDK_CONFIG_UBLK 1 00:14:28.115 #define SPDK_CONFIG_UBSAN 1 00:14:28.115 #undef SPDK_CONFIG_UNIT_TESTS 00:14:28.115 #undef SPDK_CONFIG_URING 00:14:28.115 #define SPDK_CONFIG_URING_PATH 00:14:28.115 #undef SPDK_CONFIG_URING_ZNS 00:14:28.115 #undef SPDK_CONFIG_USDT 00:14:28.115 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:28.115 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:28.115 #undef SPDK_CONFIG_VFIO_USER 00:14:28.115 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:28.115 #define SPDK_CONFIG_VHOST 1 00:14:28.115 #define SPDK_CONFIG_VIRTIO 1 00:14:28.115 #undef SPDK_CONFIG_VTUNE 00:14:28.115 #define SPDK_CONFIG_VTUNE_DIR 00:14:28.115 #define SPDK_CONFIG_WERROR 1 00:14:28.115 #define SPDK_CONFIG_WPDK_DIR 00:14:28.115 #define SPDK_CONFIG_XNVME 1 00:14:28.115 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:28.115 10:26:27 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:28.115 10:26:27 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:28.115 10:26:27 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.115 10:26:27 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.115 10:26:27 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.115 10:26:27 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.115 10:26:27 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.115 10:26:27 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.115 10:26:27 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.115 10:26:27 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:28.115 10:26:27 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@68 -- # uname -s 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:28.115 
10:26:27 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:14:28.115 10:26:27 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:28.115 10:26:27 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:28.116 10:26:27 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:28.116 10:26:27 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:28.116 10:26:27 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
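With the feature flags and library paths exported, autotest_common.sh next carves out scratch space for the test: set_test_storage (visible in the trace that follows) is asked for 2147483648 bytes, raises the request to 2214592512, and walks the mounted filesystems reported by df -T looking for one of its candidate directories (the test's own directory, then the mktemp-named /tmp/spdk.T8vgov fallback) with enough room. A minimal sketch of that selection step, reconstructed from the trace; the array and variable names follow the trace, while the 1K-block-to-byte conversion and the final comparison are assumptions about details the trace does not expand:

# set_test_storage <bytes>: pick a directory whose filesystem has enough
# free space for the test's scratch data. $testdir is set by the calling
# test script (here: test/nvme/xnvme).
set_test_storage() {
    local requested_size=$1 target_dir mount
    local -A mounts fss sizes avails uses
    local source fs size avail use _
    # Candidate locations: the test dir itself, then a mktemp-named
    # fallback under /tmp (mktemp -u only generates the name).
    local storage_fallback storage_candidates
    storage_fallback=$(mktemp -udt spdk.XXXXXX)
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    # Index every mount point; df -T reports 1K blocks, so scale to bytes.
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)
    # The trace then shows all three candidates created with mkdir -p and
    # requested_size compared against avails for the matching mounts.
}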
00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 69980 ]] 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 69980 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.T8vgov 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.T8vgov/tests/xnvme /tmp/spdk.T8vgov 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:14:28.117 10:26:27 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13960802304 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5606887424 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13960802304 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5606887424 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:28.117 10:26:27 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97923551232 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1779228672 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:14:28.117 * Looking for test storage... 
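Note: the df -T dump above feeds set_test_storage, which records free space per mount and then walks the storage candidates (the test dir, then the /tmp/spdk.XXXXXX fallbacks) until one has at least the requested ~2 GiB. A rough sketch of that selection, simplified from the trace (the real function also special-cases tmpfs/ramfs targets):

    requested_size=2214592512                      # 2 GiB plus overhead, as computed in the trace
    declare -A avails
    while read -r source fs size use avail _ mount; do
        avails[$mount]=$avail                      # free space per mount point, as reported by df -T
    done < <(df -T | grep -v Filesystem)
    for target_dir in "${storage_candidates[@]}"; do   # testdir, then the /tmp/spdk.XXXXXX fallbacks declared above
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        (( target_space >= requested_size )) && break
    done
    export SPDK_TEST_STORAGE=$target_dir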
00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13960802304 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:14:28.117 10:26:27 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:28.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:14:28.118 10:26:27 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:28.378 10:26:27 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:28.378 10:26:27 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.378 10:26:27 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:28.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.378 --rc genhtml_branch_coverage=1 00:14:28.378 --rc genhtml_function_coverage=1 00:14:28.378 --rc genhtml_legend=1 00:14:28.378 --rc geninfo_all_blocks=1 00:14:28.378 --rc geninfo_unexecuted_blocks=1 00:14:28.378 00:14:28.378 ' 00:14:28.378 10:26:27 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:28.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.378 --rc genhtml_branch_coverage=1 00:14:28.378 --rc genhtml_function_coverage=1 00:14:28.378 --rc genhtml_legend=1 00:14:28.378 --rc geninfo_all_blocks=1 
00:14:28.378 --rc geninfo_unexecuted_blocks=1 00:14:28.378 00:14:28.378 ' 00:14:28.378 10:26:27 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:28.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.378 --rc genhtml_branch_coverage=1 00:14:28.378 --rc genhtml_function_coverage=1 00:14:28.378 --rc genhtml_legend=1 00:14:28.378 --rc geninfo_all_blocks=1 00:14:28.378 --rc geninfo_unexecuted_blocks=1 00:14:28.378 00:14:28.378 ' 00:14:28.378 10:26:27 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:28.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.378 --rc genhtml_branch_coverage=1 00:14:28.378 --rc genhtml_function_coverage=1 00:14:28.378 --rc genhtml_legend=1 00:14:28.378 --rc geninfo_all_blocks=1 00:14:28.378 --rc geninfo_unexecuted_blocks=1 00:14:28.378 00:14:28.378 ' 00:14:28.378 10:26:27 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.378 10:26:27 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.378 10:26:27 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.378 10:26:27 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.378 10:26:27 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.378 10:26:27 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:28.378 10:26:27 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.379 10:26:27 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:14:28.379 10:26:27 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:28.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:29.206 Waiting for block devices as requested 00:14:29.206 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:29.465 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:29.465 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:29.465 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:34.743 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:34.743 10:26:33 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:14:35.003 10:26:34 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:14:35.003 10:26:34 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:14:35.263 10:26:34 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:14:35.263 10:26:34 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:14:35.263 10:26:34 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:14:35.263 10:26:34 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:14:35.263 10:26:34 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:35.522 No valid GPT data, bailing 00:14:35.522 10:26:34 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:35.522 10:26:34 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:14:35.522 10:26:34 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:35.522 10:26:34 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:35.522 10:26:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:35.522 10:26:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.522 10:26:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:35.522 ************************************ 00:14:35.522 START TEST xnvme_rpc 00:14:35.522 ************************************ 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70378 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70378 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70378 ']' 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.522 10:26:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.522 [2024-12-07 10:26:34.785290] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
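Note: the xnvme_rpc test starting here boots a bare spdk_tgt, creates a single xnvme bdev over /dev/nvme0n1 with the libaio io_mechanism, verifies each stored parameter through framework_get_config, then deletes the bdev and kills the target. Roughly the same sequence by hand (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so treat this as an approximation rather than the exact harness code; the target needs hugepages and root, as on this CI VM):

    ./build/bin/spdk_tgt &                                   # listens on /var/tmp/spdk.sock
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    # read the stored config back, exactly as the rpc_xnvme helper below does
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev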
00:14:35.522 [2024-12-07 10:26:34.785620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70378 ] 00:14:35.781 [2024-12-07 10:26:34.978288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.781 [2024-12-07 10:26:35.109217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.157 xnvme_bdev 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.157 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:37.158 10:26:36 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70378 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70378 ']' 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70378 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70378 00:14:37.158 killing process with pid 70378 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70378' 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70378 00:14:37.158 10:26:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70378 00:14:39.696 ************************************ 00:14:39.696 END TEST xnvme_rpc 00:14:39.696 ************************************ 00:14:39.696 00:14:39.696 real 0m4.150s 00:14:39.696 user 0m3.993s 00:14:39.696 sys 0m0.713s 00:14:39.696 10:26:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.696 10:26:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.696 10:26:38 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:39.696 10:26:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:39.696 10:26:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.696 10:26:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:39.696 ************************************ 00:14:39.696 START TEST xnvme_bdevperf 00:14:39.696 ************************************ 00:14:39.696 10:26:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:39.696 10:26:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:39.696 10:26:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:39.696 10:26:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:39.696 10:26:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:39.696 10:26:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:39.696 10:26:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:39.696 10:26:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:39.696 { 00:14:39.696 "subsystems": [ 00:14:39.696 { 00:14:39.696 "subsystem": "bdev", 00:14:39.696 "config": [ 00:14:39.696 { 00:14:39.696 "params": { 00:14:39.696 "io_mechanism": "libaio", 00:14:39.696 "conserve_cpu": false, 00:14:39.696 "filename": "/dev/nvme0n1", 00:14:39.696 "name": "xnvme_bdev" 00:14:39.696 }, 00:14:39.696 "method": "bdev_xnvme_create" 00:14:39.696 }, 00:14:39.696 { 00:14:39.696 "method": "bdev_wait_for_examine" 00:14:39.696 } 00:14:39.696 ] 00:14:39.696 } 00:14:39.696 ] 00:14:39.696 } 00:14:39.696 [2024-12-07 10:26:38.993116] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:14:39.696 [2024-12-07 10:26:38.993248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70462 ] 00:14:39.956 [2024-12-07 10:26:39.175909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.956 [2024-12-07 10:26:39.306157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.524 Running I/O for 5 seconds... 00:14:42.394 29972.00 IOPS, 117.08 MiB/s [2024-12-07T10:26:43.120Z] 29717.50 IOPS, 116.08 MiB/s [2024-12-07T10:26:44.053Z] 29655.67 IOPS, 115.84 MiB/s [2024-12-07T10:26:44.991Z] 29724.25 IOPS, 116.11 MiB/s [2024-12-07T10:26:44.991Z] 29677.20 IOPS, 115.93 MiB/s 00:14:45.638 Latency(us) 00:14:45.638 [2024-12-07T10:26:44.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.638 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:45.638 xnvme_bdev : 5.01 29660.27 115.86 0.00 0.00 2154.04 631.67 4869.14 00:14:45.638 [2024-12-07T10:26:44.991Z] =================================================================================================================== 00:14:45.638 [2024-12-07T10:26:44.991Z] Total : 29660.27 115.86 0.00 0.00 2154.04 631.67 4869.14 00:14:47.017 10:26:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:47.017 10:26:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:47.017 10:26:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:47.017 10:26:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:47.017 10:26:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:47.017 { 00:14:47.017 "subsystems": [ 00:14:47.017 { 00:14:47.017 "subsystem": "bdev", 00:14:47.017 "config": [ 00:14:47.017 { 00:14:47.017 "params": { 00:14:47.017 "io_mechanism": "libaio", 00:14:47.017 "conserve_cpu": false, 00:14:47.017 "filename": "/dev/nvme0n1", 00:14:47.017 "name": "xnvme_bdev" 00:14:47.017 }, 00:14:47.017 "method": "bdev_xnvme_create" 00:14:47.017 }, 00:14:47.017 { 00:14:47.017 "method": "bdev_wait_for_examine" 00:14:47.017 } 00:14:47.017 ] 00:14:47.017 } 00:14:47.017 ] 00:14:47.017 } 00:14:47.017 [2024-12-07 10:26:46.034420] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
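Note: each xnvme_bdevperf pass generates a one-bdev JSON config with gen_conf and streams it to bdevperf over /dev/fd/62, then runs a 5-second workload at queue depth 64 with 4 KiB I/O. A hedged standalone reproduction using a regular config file instead of the fd redirection (paths as used in this workspace):

    cat > /tmp/xnvme_bdev.json << 'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev",
          "config": [
            { "method": "bdev_xnvme_create",
              "params": { "name": "xnvme_bdev", "filename": "/dev/nvme0n1",
                          "io_mechanism": "libaio", "conserve_cpu": false } },
            { "method": "bdev_wait_for_examine" }
          ] }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/xnvme_bdev.json -q 64 -w randread  -t 5 -T xnvme_bdev -o 4096
    ./build/examples/bdevperf --json /tmp/xnvme_bdev.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096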
00:14:47.017 [2024-12-07 10:26:46.034551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70545 ] 00:14:47.017 [2024-12-07 10:26:46.219937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.017 [2024-12-07 10:26:46.347305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.585 Running I/O for 5 seconds... 00:14:49.450 31846.00 IOPS, 124.40 MiB/s [2024-12-07T10:26:50.177Z] 32016.00 IOPS, 125.06 MiB/s [2024-12-07T10:26:51.111Z] 31979.00 IOPS, 124.92 MiB/s [2024-12-07T10:26:52.045Z] 34316.75 IOPS, 134.05 MiB/s [2024-12-07T10:26:52.045Z] 36903.80 IOPS, 144.16 MiB/s 00:14:52.692 Latency(us) 00:14:52.692 [2024-12-07T10:26:52.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.692 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:52.692 xnvme_bdev : 5.00 36881.91 144.07 0.00 0.00 1731.49 309.26 5132.34 00:14:52.692 [2024-12-07T10:26:52.045Z] =================================================================================================================== 00:14:52.692 [2024-12-07T10:26:52.045Z] Total : 36881.91 144.07 0.00 0.00 1731.49 309.26 5132.34 00:14:54.071 00:14:54.072 real 0m14.092s 00:14:54.072 user 0m4.720s 00:14:54.072 sys 0m6.736s 00:14:54.072 10:26:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.072 10:26:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:54.072 ************************************ 00:14:54.072 END TEST xnvme_bdevperf 00:14:54.072 ************************************ 00:14:54.072 10:26:53 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:54.072 10:26:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:54.072 10:26:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.072 10:26:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:54.072 ************************************ 00:14:54.072 START TEST xnvme_fio_plugin 00:14:54.072 ************************************ 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:54.072 
10:26:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:54.072 10:26:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:54.072 { 00:14:54.072 "subsystems": [ 00:14:54.072 { 00:14:54.072 "subsystem": "bdev", 00:14:54.072 "config": [ 00:14:54.072 { 00:14:54.072 "params": { 00:14:54.072 "io_mechanism": "libaio", 00:14:54.072 "conserve_cpu": false, 00:14:54.072 "filename": "/dev/nvme0n1", 00:14:54.072 "name": "xnvme_bdev" 00:14:54.072 }, 00:14:54.072 "method": "bdev_xnvme_create" 00:14:54.072 }, 00:14:54.072 { 00:14:54.072 "method": "bdev_wait_for_examine" 00:14:54.072 } 00:14:54.072 ] 00:14:54.072 } 00:14:54.072 ] 00:14:54.072 } 00:14:54.072 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:54.072 fio-3.35 00:14:54.072 Starting 1 thread 00:15:00.741 00:15:00.741 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70670: Sat Dec 7 10:26:59 2024 00:15:00.741 read: IOPS=43.5k, BW=170MiB/s (178MB/s)(850MiB/5001msec) 00:15:00.741 slat (usec): min=4, max=877, avg=20.37, stdev=24.57 00:15:00.741 clat (usec): min=33, max=9497, avg=856.52, stdev=565.74 00:15:00.741 lat (usec): min=106, max=9554, avg=876.88, stdev=569.95 00:15:00.741 clat percentiles (usec): 00:15:00.741 | 1.00th=[ 163], 5.00th=[ 235], 10.00th=[ 306], 20.00th=[ 424], 00:15:00.741 | 30.00th=[ 537], 40.00th=[ 652], 50.00th=[ 758], 60.00th=[ 873], 00:15:00.741 | 70.00th=[ 996], 80.00th=[ 1156], 90.00th=[ 1450], 95.00th=[ 1811], 00:15:00.741 | 99.00th=[ 3163], 99.50th=[ 3720], 99.90th=[ 4686], 99.95th=[ 5145], 00:15:00.741 | 99.99th=[ 5997] 00:15:00.741 bw ( KiB/s): min=143328, max=215752, 
per=100.00%, avg=175888.00, stdev=26374.21, samples=9 00:15:00.741 iops : min=35832, max=53938, avg=43972.00, stdev=6593.55, samples=9 00:15:00.741 lat (usec) : 50=0.01%, 100=0.07%, 250=5.98%, 500=20.49%, 750=22.45% 00:15:00.741 lat (usec) : 1000=21.32% 00:15:00.741 lat (msec) : 2=26.00%, 4=3.34%, 10=0.35% 00:15:00.741 cpu : usr=24.26%, sys=53.02%, ctx=65, majf=0, minf=764 00:15:00.741 IO depths : 1=0.1%, 2=0.9%, 4=4.0%, 8=11.4%, 16=26.3%, 32=55.6%, >=64=1.8% 00:15:00.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.741 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:00.741 issued rwts: total=217479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.741 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:00.741 00:15:00.741 Run status group 0 (all jobs): 00:15:00.741 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=850MiB (891MB), run=5001-5001msec 00:15:01.311 ----------------------------------------------------- 00:15:01.311 Suppressions used: 00:15:01.311 count bytes template 00:15:01.311 1 11 /usr/src/fio/parse.c 00:15:01.311 1 8 libtcmalloc_minimal.so 00:15:01.311 1 904 libcrypto.so 00:15:01.311 ----------------------------------------------------- 00:15:01.311 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:01.311 10:27:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.311 { 00:15:01.311 "subsystems": [ 00:15:01.311 { 00:15:01.311 "subsystem": "bdev", 00:15:01.311 "config": [ 00:15:01.311 { 00:15:01.311 "params": { 00:15:01.311 "io_mechanism": "libaio", 00:15:01.311 "conserve_cpu": false, 00:15:01.311 "filename": "/dev/nvme0n1", 00:15:01.311 "name": "xnvme_bdev" 00:15:01.311 }, 00:15:01.311 "method": "bdev_xnvme_create" 00:15:01.311 }, 00:15:01.311 { 00:15:01.311 "method": "bdev_wait_for_examine" 00:15:01.311 } 00:15:01.311 ] 00:15:01.311 } 00:15:01.311 ] 00:15:01.311 } 00:15:01.571 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:01.571 fio-3.35 00:15:01.571 Starting 1 thread 00:15:08.145 00:15:08.145 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70767: Sat Dec 7 10:27:06 2024 00:15:08.145 write: IOPS=46.4k, BW=181MiB/s (190MB/s)(907MiB/5001msec); 0 zone resets 00:15:08.145 slat (usec): min=4, max=695, avg=18.90, stdev=29.39 00:15:08.145 clat (usec): min=83, max=7360, avg=812.00, stdev=485.70 00:15:08.145 lat (usec): min=129, max=7365, avg=830.90, stdev=488.46 00:15:08.145 clat percentiles (usec): 00:15:08.145 | 1.00th=[ 176], 5.00th=[ 255], 10.00th=[ 326], 20.00th=[ 441], 00:15:08.145 | 30.00th=[ 545], 40.00th=[ 644], 50.00th=[ 742], 60.00th=[ 848], 00:15:08.145 | 70.00th=[ 955], 80.00th=[ 1090], 90.00th=[ 1287], 95.00th=[ 1565], 00:15:08.145 | 99.00th=[ 2769], 99.50th=[ 3425], 99.90th=[ 4359], 99.95th=[ 4686], 00:15:08.145 | 99.99th=[ 5276] 00:15:08.145 bw ( KiB/s): min=139992, max=225088, per=99.46%, avg=184766.22, stdev=27374.13, samples=9 00:15:08.145 iops : min=34998, max=56272, avg=46191.56, stdev=6843.53, samples=9 00:15:08.145 lat (usec) : 100=0.09%, 250=4.52%, 500=21.38%, 750=24.68%, 1000=23.00% 00:15:08.145 lat (msec) : 2=24.10%, 4=2.04%, 10=0.20% 00:15:08.145 cpu : usr=26.14%, sys=56.02%, ctx=51, majf=0, minf=765 00:15:08.145 IO depths : 1=0.1%, 2=0.9%, 4=3.8%, 8=10.8%, 16=26.1%, 32=56.5%, >=64=1.8% 00:15:08.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.145 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:08.145 issued rwts: total=0,232269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.145 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:08.145 00:15:08.145 Run status group 0 (all jobs): 00:15:08.145 WRITE: bw=181MiB/s (190MB/s), 181MiB/s-181MiB/s (190MB/s-190MB/s), io=907MiB (951MB), run=5001-5001msec 00:15:08.711 ----------------------------------------------------- 00:15:08.711 Suppressions used: 00:15:08.711 count bytes template 00:15:08.711 1 11 /usr/src/fio/parse.c 00:15:08.711 1 8 libtcmalloc_minimal.so 00:15:08.711 1 904 libcrypto.so 00:15:08.711 ----------------------------------------------------- 00:15:08.711 00:15:08.711 00:15:08.711 real 0m14.852s 00:15:08.711 user 
0m6.216s 00:15:08.711 sys 0m6.288s 00:15:08.711 ************************************ 00:15:08.711 END TEST xnvme_fio_plugin 00:15:08.711 ************************************ 00:15:08.711 10:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.711 10:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:08.711 10:27:07 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:08.711 10:27:07 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:08.711 10:27:07 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:08.711 10:27:07 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:08.711 10:27:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:08.711 10:27:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.711 10:27:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.711 ************************************ 00:15:08.711 START TEST xnvme_rpc 00:15:08.711 ************************************ 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70854 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70854 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70854 ']' 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.711 10:27:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.970 [2024-12-07 10:27:08.096367] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
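Note: the xnvme_fio_plugin runs above drive the same libaio bdev through fio's external spdk_bdev ioengine instead of bdevperf; the trace preloads ASan together with the SPDK fio plugin and again passes the JSON config on /dev/fd/62. A hedged equivalent using the config file from the bdevperf sketch above (the libasan preload is only needed for ASan builds like this one):

    LD_PRELOAD='/usr/lib64/libasan.so.8 ./build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev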
00:15:08.970 [2024-12-07 10:27:08.096489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70854 ] 00:15:08.970 [2024-12-07 10:27:08.270950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.228 [2024-12-07 10:27:08.376178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.166 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.166 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:10.166 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:15:10.166 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.166 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.166 xnvme_bdev 00:15:10.166 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.166 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:10.166 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:10.166 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.166 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70854 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70854 ']' 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70854 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70854 00:15:10.167 killing process with pid 70854 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70854' 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70854 00:15:10.167 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70854 00:15:12.705 ************************************ 00:15:12.705 END TEST xnvme_rpc 00:15:12.706 ************************************ 00:15:12.706 00:15:12.706 real 0m3.819s 00:15:12.706 user 0m3.857s 00:15:12.706 sys 0m0.545s 00:15:12.706 10:27:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.706 10:27:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.706 10:27:11 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:12.706 10:27:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:12.706 10:27:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.706 10:27:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.706 ************************************ 00:15:12.706 START TEST xnvme_bdevperf 00:15:12.706 ************************************ 00:15:12.706 10:27:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:12.706 10:27:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:12.706 10:27:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:15:12.706 10:27:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:12.706 10:27:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:12.706 10:27:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:15:12.706 10:27:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:12.706 10:27:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:12.706 { 00:15:12.706 "subsystems": [ 00:15:12.706 { 00:15:12.706 "subsystem": "bdev", 00:15:12.706 "config": [ 00:15:12.706 { 00:15:12.706 "params": { 00:15:12.706 "io_mechanism": "libaio", 00:15:12.706 "conserve_cpu": true, 00:15:12.706 "filename": "/dev/nvme0n1", 00:15:12.706 "name": "xnvme_bdev" 00:15:12.706 }, 00:15:12.706 "method": "bdev_xnvme_create" 00:15:12.706 }, 00:15:12.706 { 00:15:12.706 "method": "bdev_wait_for_examine" 00:15:12.706 } 00:15:12.706 ] 00:15:12.706 } 00:15:12.706 ] 00:15:12.706 } 00:15:12.706 [2024-12-07 10:27:11.976432] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:15:12.706 [2024-12-07 10:27:11.976543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70937 ] 00:15:12.965 [2024-12-07 10:27:12.154045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.965 [2024-12-07 10:27:12.267751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.535 Running I/O for 5 seconds... 00:15:15.401 39155.00 IOPS, 152.95 MiB/s [2024-12-07T10:27:15.684Z] 39805.50 IOPS, 155.49 MiB/s [2024-12-07T10:27:17.056Z] 39216.33 IOPS, 153.19 MiB/s [2024-12-07T10:27:17.623Z] 39595.00 IOPS, 154.67 MiB/s [2024-12-07T10:27:17.623Z] 39944.00 IOPS, 156.03 MiB/s 00:15:18.270 Latency(us) 00:15:18.270 [2024-12-07T10:27:17.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.270 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:18.270 xnvme_bdev : 5.00 39926.80 155.96 0.00 0.00 1598.91 539.55 5369.21 00:15:18.270 [2024-12-07T10:27:17.623Z] =================================================================================================================== 00:15:18.270 [2024-12-07T10:27:17.623Z] Total : 39926.80 155.96 0.00 0.00 1598.91 539.55 5369.21 00:15:19.650 10:27:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:19.650 10:27:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:19.650 10:27:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:19.650 10:27:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:19.650 10:27:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:19.650 { 00:15:19.650 "subsystems": [ 00:15:19.650 { 00:15:19.650 "subsystem": "bdev", 00:15:19.650 "config": [ 00:15:19.650 { 00:15:19.650 "params": { 00:15:19.650 "io_mechanism": "libaio", 00:15:19.650 "conserve_cpu": true, 00:15:19.650 "filename": "/dev/nvme0n1", 00:15:19.650 "name": "xnvme_bdev" 00:15:19.650 }, 00:15:19.650 "method": "bdev_xnvme_create" 00:15:19.650 }, 00:15:19.650 { 00:15:19.650 "method": "bdev_wait_for_examine" 00:15:19.650 } 00:15:19.650 ] 00:15:19.650 } 00:15:19.650 ] 00:15:19.650 } 00:15:19.650 [2024-12-07 10:27:18.828108] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
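Both bdevperf passes in this test (the randread run summarised above and the randwrite run now starting) hand the printed JSON config to the bdevperf example app over an inherited file descriptor, /dev/fd/62. A hand-run equivalent is plain bash process substitution; the flag values below are taken verbatim from the trace (queue depth 64, 4 KiB I/O, 5 seconds, target bdev xnvme_bdev), only the way the config reaches the app differs:

  CONF='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_xnvme_create","params":{"io_mechanism":"libaio","conserve_cpu":true,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},{"method":"bdev_wait_for_examine"}]}]}'
  # <(...) expands to a /dev/fd/N path, the same mechanism the test uses for fd 62
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(printf '%s' "$CONF") -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev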
00:15:19.650 [2024-12-07 10:27:18.828361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71018 ] 00:15:19.910 [2024-12-07 10:27:19.006459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.910 [2024-12-07 10:27:19.118077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.169 Running I/O for 5 seconds... 00:15:22.481 42215.00 IOPS, 164.90 MiB/s [2024-12-07T10:27:22.774Z] 42673.50 IOPS, 166.69 MiB/s [2024-12-07T10:27:23.712Z] 40887.33 IOPS, 159.72 MiB/s [2024-12-07T10:27:24.649Z] 39869.00 IOPS, 155.74 MiB/s [2024-12-07T10:27:24.649Z] 39505.20 IOPS, 154.32 MiB/s 00:15:25.296 Latency(us) 00:15:25.296 [2024-12-07T10:27:24.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.296 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:25.296 xnvme_bdev : 5.01 39434.14 154.04 0.00 0.00 1619.08 233.59 9475.08 00:15:25.296 [2024-12-07T10:27:24.649Z] =================================================================================================================== 00:15:25.296 [2024-12-07T10:27:24.649Z] Total : 39434.14 154.04 0.00 0.00 1619.08 233.59 9475.08 00:15:26.686 00:15:26.686 real 0m13.770s 00:15:26.686 user 0m5.034s 00:15:26.686 sys 0m6.131s 00:15:26.686 10:27:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.686 10:27:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:26.686 ************************************ 00:15:26.686 END TEST xnvme_bdevperf 00:15:26.686 ************************************ 00:15:26.686 10:27:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:26.686 10:27:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:26.686 10:27:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.686 10:27:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:26.686 ************************************ 00:15:26.686 START TEST xnvme_fio_plugin 00:15:26.686 ************************************ 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:26.686 
10:27:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:26.686 { 00:15:26.686 "subsystems": [ 00:15:26.686 { 00:15:26.686 "subsystem": "bdev", 00:15:26.686 "config": [ 00:15:26.686 { 00:15:26.686 "params": { 00:15:26.686 "io_mechanism": "libaio", 00:15:26.686 "conserve_cpu": true, 00:15:26.686 "filename": "/dev/nvme0n1", 00:15:26.686 "name": "xnvme_bdev" 00:15:26.686 }, 00:15:26.686 "method": "bdev_xnvme_create" 00:15:26.686 }, 00:15:26.686 { 00:15:26.686 "method": "bdev_wait_for_examine" 00:15:26.686 } 00:15:26.686 ] 00:15:26.686 } 00:15:26.686 ] 00:15:26.686 } 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:26.686 10:27:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:26.686 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:26.686 fio-3.35 00:15:26.686 Starting 1 thread 00:15:33.285 00:15:33.285 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71137: Sat Dec 7 10:27:31 2024 00:15:33.285 read: IOPS=45.3k, BW=177MiB/s (185MB/s)(884MiB/5001msec) 00:15:33.285 slat (usec): min=4, max=1297, avg=19.24, stdev=27.52 00:15:33.285 clat (usec): min=56, max=5319, avg=847.50, stdev=522.69 00:15:33.285 lat (usec): min=70, max=5438, avg=866.73, stdev=526.77 00:15:33.285 clat percentiles (usec): 00:15:33.285 | 1.00th=[ 186], 5.00th=[ 262], 10.00th=[ 334], 20.00th=[ 461], 00:15:33.285 | 30.00th=[ 570], 40.00th=[ 676], 50.00th=[ 783], 60.00th=[ 881], 00:15:33.285 | 70.00th=[ 979], 80.00th=[ 1106], 90.00th=[ 1303], 95.00th=[ 1598], 00:15:33.285 | 99.00th=[ 3163], 99.50th=[ 3720], 99.90th=[ 4424], 99.95th=[ 4621], 00:15:33.285 | 99.99th=[ 5014] 00:15:33.285 bw ( KiB/s): min=171512, max=197408, 
per=99.46%, avg=180061.33, stdev=8698.70, samples=9 00:15:33.285 iops : min=42878, max=49352, avg=45015.33, stdev=2174.67, samples=9 00:15:33.285 lat (usec) : 100=0.03%, 250=4.21%, 500=19.27%, 750=23.67%, 1000=24.38% 00:15:33.285 lat (msec) : 2=25.25%, 4=2.86%, 10=0.32% 00:15:33.285 cpu : usr=27.60%, sys=52.92%, ctx=82, majf=0, minf=764 00:15:33.285 IO depths : 1=0.1%, 2=0.8%, 4=3.8%, 8=10.8%, 16=25.8%, 32=56.8%, >=64=1.8% 00:15:33.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.285 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:33.285 issued rwts: total=226345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.285 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:33.285 00:15:33.285 Run status group 0 (all jobs): 00:15:33.285 READ: bw=177MiB/s (185MB/s), 177MiB/s-177MiB/s (185MB/s-185MB/s), io=884MiB (927MB), run=5001-5001msec 00:15:33.853 ----------------------------------------------------- 00:15:33.853 Suppressions used: 00:15:33.853 count bytes template 00:15:33.853 1 11 /usr/src/fio/parse.c 00:15:33.853 1 8 libtcmalloc_minimal.so 00:15:33.853 1 904 libcrypto.so 00:15:33.853 ----------------------------------------------------- 00:15:33.853 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:33.853 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:33.853 { 00:15:33.853 "subsystems": [ 00:15:33.853 { 00:15:33.853 "subsystem": "bdev", 00:15:33.853 "config": [ 00:15:33.853 { 00:15:33.853 
"params": { 00:15:33.853 "io_mechanism": "libaio", 00:15:33.854 "conserve_cpu": true, 00:15:33.854 "filename": "/dev/nvme0n1", 00:15:33.854 "name": "xnvme_bdev" 00:15:33.854 }, 00:15:33.854 "method": "bdev_xnvme_create" 00:15:33.854 }, 00:15:33.854 { 00:15:33.854 "method": "bdev_wait_for_examine" 00:15:33.854 } 00:15:33.854 ] 00:15:33.854 } 00:15:33.854 ] 00:15:33.854 } 00:15:33.854 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:33.854 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:33.854 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:33.854 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:33.854 10:27:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.113 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:34.114 fio-3.35 00:15:34.114 Starting 1 thread 00:15:40.683 00:15:40.683 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71240: Sat Dec 7 10:27:39 2024 00:15:40.683 write: IOPS=43.4k, BW=169MiB/s (178MB/s)(847MiB/5001msec); 0 zone resets 00:15:40.683 slat (usec): min=4, max=794, avg=20.22, stdev=29.47 00:15:40.683 clat (usec): min=87, max=5272, avg=869.32, stdev=502.77 00:15:40.683 lat (usec): min=105, max=5316, avg=889.53, stdev=505.64 00:15:40.683 clat percentiles (usec): 00:15:40.683 | 1.00th=[ 190], 5.00th=[ 265], 10.00th=[ 338], 20.00th=[ 465], 00:15:40.683 | 30.00th=[ 586], 40.00th=[ 701], 50.00th=[ 816], 60.00th=[ 922], 00:15:40.683 | 70.00th=[ 1045], 80.00th=[ 1172], 90.00th=[ 1369], 95.00th=[ 1582], 00:15:40.683 | 99.00th=[ 2933], 99.50th=[ 3523], 99.90th=[ 4293], 99.95th=[ 4490], 00:15:40.683 | 99.99th=[ 4883] 00:15:40.683 bw ( KiB/s): min=165536, max=184776, per=100.00%, avg=175909.33, stdev=5454.78, samples=9 00:15:40.683 iops : min=41384, max=46202, avg=43977.78, stdev=1365.03, samples=9 00:15:40.683 lat (usec) : 100=0.05%, 250=4.00%, 500=19.00%, 750=21.41%, 1000=22.05% 00:15:40.683 lat (msec) : 2=30.93%, 4=2.35%, 10=0.21% 00:15:40.683 cpu : usr=25.60%, sys=55.28%, ctx=51, majf=0, minf=765 00:15:40.683 IO depths : 1=0.1%, 2=0.9%, 4=4.1%, 8=11.4%, 16=26.1%, 32=55.5%, >=64=1.8% 00:15:40.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.683 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:40.683 issued rwts: total=0,216844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.683 00:15:40.683 Run status group 0 (all jobs): 00:15:40.683 WRITE: bw=169MiB/s (178MB/s), 169MiB/s-169MiB/s (178MB/s-178MB/s), io=847MiB (888MB), run=5001-5001msec 00:15:41.252 ----------------------------------------------------- 00:15:41.252 Suppressions used: 00:15:41.252 count bytes template 00:15:41.252 1 11 /usr/src/fio/parse.c 00:15:41.252 1 8 libtcmalloc_minimal.so 00:15:41.252 1 904 libcrypto.so 00:15:41.252 ----------------------------------------------------- 00:15:41.252 00:15:41.252 00:15:41.252 real 0m14.714s 00:15:41.252 user 0m6.264s 00:15:41.252 sys 0m6.168s 00:15:41.252 10:27:40 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.252 ************************************ 00:15:41.252 END TEST xnvme_fio_plugin 00:15:41.252 ************************************ 00:15:41.252 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:41.252 10:27:40 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:41.252 10:27:40 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:41.252 10:27:40 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:15:41.252 10:27:40 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:15:41.252 10:27:40 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:41.252 10:27:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:41.252 10:27:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:41.252 10:27:40 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:41.252 10:27:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:41.252 10:27:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:41.252 10:27:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.252 10:27:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.252 ************************************ 00:15:41.252 START TEST xnvme_rpc 00:15:41.252 ************************************ 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71326 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71326 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71326 ']' 00:15:41.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.252 10:27:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.512 [2024-12-07 10:27:40.642408] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
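The libaio xnvme_fio_plugin pass wrapped up just above; it is the part of this suite where stock fio drives the bdev through SPDK's external spdk_bdev ioengine instead of bdevperf. Because the build is ASan-instrumented, the test resolves libasan from ldd on the plugin and preloads it ahead of the plugin itself. A condensed sketch of that invocation, with paths and fio flags exactly as traced; the JSON config is assumed to have been saved to a file of your choosing (the /tmp path is hypothetical):

  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  CONF=/tmp/xnvme_bdev.json                                # hypothetical path holding the config shown earlier
  ASAN=$(ldd "$PLUGIN" | awk '/libasan/ {print $3}')       # same lookup the test performs
  LD_PRELOAD="$ASAN $PLUGIN" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf="$CONF" --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name=xnvme_bdev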
00:15:41.512 [2024-12-07 10:27:40.642551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71326 ] 00:15:41.512 [2024-12-07 10:27:40.823266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.771 [2024-12-07 10:27:40.931192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.710 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.711 xnvme_bdev 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71326 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71326 ']' 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71326 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.711 10:27:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71326 00:15:42.711 killing process with pid 71326 00:15:42.711 10:27:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.711 10:27:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.711 10:27:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71326' 00:15:42.711 10:27:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71326 00:15:42.711 10:27:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71326 00:15:45.252 00:15:45.252 real 0m3.784s 00:15:45.252 user 0m3.840s 00:15:45.252 sys 0m0.556s 00:15:45.252 ************************************ 00:15:45.252 END TEST xnvme_rpc 00:15:45.252 ************************************ 00:15:45.252 10:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.252 10:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.252 10:27:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:45.252 10:27:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:45.252 10:27:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.252 10:27:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.252 ************************************ 00:15:45.252 START TEST xnvme_bdevperf 00:15:45.252 ************************************ 00:15:45.252 10:27:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:45.252 10:27:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:45.252 10:27:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:45.252 10:27:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:45.252 10:27:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:45.252 10:27:44 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:45.252 10:27:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:45.252 10:27:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:45.252 { 00:15:45.252 "subsystems": [ 00:15:45.252 { 00:15:45.252 "subsystem": "bdev", 00:15:45.252 "config": [ 00:15:45.252 { 00:15:45.252 "params": { 00:15:45.252 "io_mechanism": "io_uring", 00:15:45.252 "conserve_cpu": false, 00:15:45.252 "filename": "/dev/nvme0n1", 00:15:45.252 "name": "xnvme_bdev" 00:15:45.252 }, 00:15:45.252 "method": "bdev_xnvme_create" 00:15:45.252 }, 00:15:45.252 { 00:15:45.252 "method": "bdev_wait_for_examine" 00:15:45.252 } 00:15:45.252 ] 00:15:45.252 } 00:15:45.252 ] 00:15:45.252 } 00:15:45.252 [2024-12-07 10:27:44.481417] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:15:45.252 [2024-12-07 10:27:44.481681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71406 ] 00:15:45.512 [2024-12-07 10:27:44.667008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.512 [2024-12-07 10:27:44.782736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.770 Running I/O for 5 seconds... 00:15:48.078 54912.00 IOPS, 214.50 MiB/s [2024-12-07T10:27:48.364Z] 53440.00 IOPS, 208.75 MiB/s [2024-12-07T10:27:49.299Z] 53333.33 IOPS, 208.33 MiB/s [2024-12-07T10:27:50.236Z] 52800.00 IOPS, 206.25 MiB/s [2024-12-07T10:27:50.236Z] 52057.60 IOPS, 203.35 MiB/s 00:15:50.883 Latency(us) 00:15:50.883 [2024-12-07T10:27:50.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.883 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:50.883 xnvme_bdev : 5.00 52021.61 203.21 0.00 0.00 1227.00 786.30 4684.90 00:15:50.883 [2024-12-07T10:27:50.236Z] =================================================================================================================== 00:15:50.883 [2024-12-07T10:27:50.236Z] Total : 52021.61 203.21 0.00 0.00 1227.00 786.30 4684.90 00:15:52.259 10:27:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:52.259 10:27:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:52.259 10:27:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:52.259 10:27:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:52.259 10:27:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:52.259 { 00:15:52.259 "subsystems": [ 00:15:52.260 { 00:15:52.260 "subsystem": "bdev", 00:15:52.260 "config": [ 00:15:52.260 { 00:15:52.260 "params": { 00:15:52.260 "io_mechanism": "io_uring", 00:15:52.260 "conserve_cpu": false, 00:15:52.260 "filename": "/dev/nvme0n1", 00:15:52.260 "name": "xnvme_bdev" 00:15:52.260 }, 00:15:52.260 "method": "bdev_xnvme_create" 00:15:52.260 }, 00:15:52.260 { 00:15:52.260 "method": "bdev_wait_for_examine" 00:15:52.260 } 00:15:52.260 ] 00:15:52.260 } 00:15:52.260 ] 00:15:52.260 } 00:15:52.260 [2024-12-07 10:27:51.291774] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
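The params block printed just above is the only thing that changed for this round: io_mechanism is now io_uring and conserve_cpu is false, while filename and bdev name stay the same. Put side by side as plain RPC calls (a sketch, assuming scripts/rpc.py on PATH; the libaio form is the one traced at the top of this section):

  rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c    # earlier round: libaio, conserve_cpu on
  rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring     # this round: io_uring, conserve_cpu off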
00:15:52.260 [2024-12-07 10:27:51.292115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71481 ] 00:15:52.260 [2024-12-07 10:27:51.471622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.260 [2024-12-07 10:27:51.584844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.827 Running I/O for 5 seconds... 00:15:54.702 23936.00 IOPS, 93.50 MiB/s [2024-12-07T10:27:54.991Z] 23616.00 IOPS, 92.25 MiB/s [2024-12-07T10:27:55.926Z] 23274.67 IOPS, 90.92 MiB/s [2024-12-07T10:27:57.304Z] 23120.00 IOPS, 90.31 MiB/s 00:15:57.951 Latency(us) 00:15:57.951 [2024-12-07T10:27:57.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.951 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:57.951 xnvme_bdev : 5.00 23547.14 91.98 0.00 0.00 2709.32 1579.18 7737.99 00:15:57.951 [2024-12-07T10:27:57.304Z] =================================================================================================================== 00:15:57.951 [2024-12-07T10:27:57.304Z] Total : 23547.14 91.98 0.00 0.00 2709.32 1579.18 7737.99 00:15:58.891 00:15:58.891 real 0m13.630s 00:15:58.891 user 0m6.505s 00:15:58.891 sys 0m6.903s 00:15:58.891 ************************************ 00:15:58.891 END TEST xnvme_bdevperf 00:15:58.891 ************************************ 00:15:58.891 10:27:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.891 10:27:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:58.891 10:27:58 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:58.891 10:27:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:58.891 10:27:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.891 10:27:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.891 ************************************ 00:15:58.891 START TEST xnvme_fio_plugin 00:15:58.891 ************************************ 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:58.891 { 00:15:58.891 "subsystems": [ 00:15:58.891 { 00:15:58.891 "subsystem": "bdev", 00:15:58.891 "config": [ 00:15:58.891 { 00:15:58.891 "params": { 00:15:58.891 "io_mechanism": "io_uring", 00:15:58.891 "conserve_cpu": false, 00:15:58.891 "filename": "/dev/nvme0n1", 00:15:58.891 "name": "xnvme_bdev" 00:15:58.891 }, 00:15:58.891 "method": "bdev_xnvme_create" 00:15:58.891 }, 00:15:58.891 { 00:15:58.891 "method": "bdev_wait_for_examine" 00:15:58.891 } 00:15:58.891 ] 00:15:58.891 } 00:15:58.891 ] 00:15:58.891 } 00:15:58.891 10:27:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:59.151 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:59.151 fio-3.35 00:15:59.151 Starting 1 thread 00:16:05.726 00:16:05.726 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71606: Sat Dec 7 10:28:04 2024 00:16:05.726 read: IOPS=23.1k, BW=90.3MiB/s (94.7MB/s)(452MiB/5001msec) 00:16:05.726 slat (usec): min=3, max=190, avg= 8.18, stdev= 3.86 00:16:05.727 clat (usec): min=850, max=3663, avg=2441.72, stdev=299.28 00:16:05.727 lat (usec): min=857, max=3689, avg=2449.89, stdev=300.67 00:16:05.727 clat percentiles (usec): 00:16:05.727 | 1.00th=[ 1598], 5.00th=[ 1844], 10.00th=[ 2024], 20.00th=[ 2245], 00:16:05.727 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:16:05.727 | 70.00th=[ 2638], 80.00th=[ 2704], 90.00th=[ 2802], 95.00th=[ 2835], 00:16:05.727 | 99.00th=[ 2966], 99.50th=[ 2999], 99.90th=[ 3097], 99.95th=[ 3195], 00:16:05.727 | 99.99th=[ 3523] 00:16:05.727 bw ( KiB/s): min=87040, max=99840, per=99.54%, avg=92042.44, stdev=4076.84, 
samples=9 00:16:05.727 iops : min=21760, max=24960, avg=23010.56, stdev=1019.21, samples=9 00:16:05.727 lat (usec) : 1000=0.02% 00:16:05.727 lat (msec) : 2=9.30%, 4=90.68% 00:16:05.727 cpu : usr=37.98%, sys=60.32%, ctx=13, majf=0, minf=762 00:16:05.727 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:05.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.727 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:05.727 issued rwts: total=115613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:05.727 00:16:05.727 Run status group 0 (all jobs): 00:16:05.727 READ: bw=90.3MiB/s (94.7MB/s), 90.3MiB/s-90.3MiB/s (94.7MB/s-94.7MB/s), io=452MiB (474MB), run=5001-5001msec 00:16:06.297 ----------------------------------------------------- 00:16:06.297 Suppressions used: 00:16:06.297 count bytes template 00:16:06.297 1 11 /usr/src/fio/parse.c 00:16:06.297 1 8 libtcmalloc_minimal.so 00:16:06.297 1 904 libcrypto.so 00:16:06.297 ----------------------------------------------------- 00:16:06.297 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:06.297 10:28:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:06.297 { 00:16:06.297 "subsystems": [ 00:16:06.297 { 00:16:06.297 "subsystem": "bdev", 00:16:06.297 "config": [ 00:16:06.297 { 00:16:06.297 "params": { 00:16:06.297 "io_mechanism": "io_uring", 00:16:06.297 "conserve_cpu": false, 00:16:06.297 "filename": "/dev/nvme0n1", 00:16:06.297 "name": "xnvme_bdev" 00:16:06.297 }, 00:16:06.297 "method": "bdev_xnvme_create" 00:16:06.297 }, 00:16:06.297 { 00:16:06.297 "method": "bdev_wait_for_examine" 00:16:06.297 } 00:16:06.297 ] 00:16:06.297 } 00:16:06.297 ] 00:16:06.297 } 00:16:06.555 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:06.555 fio-3.35 00:16:06.555 Starting 1 thread 00:16:13.237 00:16:13.237 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71703: Sat Dec 7 10:28:11 2024 00:16:13.237 write: IOPS=23.1k, BW=90.2MiB/s (94.6MB/s)(451MiB/5001msec); 0 zone resets 00:16:13.237 slat (nsec): min=2770, max=93656, avg=8437.24, stdev=4010.81 00:16:13.237 clat (usec): min=1287, max=5530, avg=2433.57, stdev=331.02 00:16:13.237 lat (usec): min=1291, max=5560, avg=2442.01, stdev=332.52 00:16:13.237 clat percentiles (usec): 00:16:13.237 | 1.00th=[ 1516], 5.00th=[ 1795], 10.00th=[ 1975], 20.00th=[ 2212], 00:16:13.237 | 30.00th=[ 2311], 40.00th=[ 2376], 50.00th=[ 2474], 60.00th=[ 2540], 00:16:13.237 | 70.00th=[ 2638], 80.00th=[ 2704], 90.00th=[ 2802], 95.00th=[ 2868], 00:16:13.237 | 99.00th=[ 2966], 99.50th=[ 2999], 99.90th=[ 3163], 99.95th=[ 4948], 00:16:13.237 | 99.99th=[ 5407] 00:16:13.237 bw ( KiB/s): min=85504, max=101736, per=100.00%, avg=92541.33, stdev=5469.10, samples=9 00:16:13.237 iops : min=21376, max=25434, avg=23135.33, stdev=1367.28, samples=9 00:16:13.237 lat (msec) : 2=10.85%, 4=89.08%, 10=0.07% 00:16:13.237 cpu : usr=39.56%, sys=58.76%, ctx=13, majf=0, minf=763 00:16:13.237 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:13.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.237 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:13.237 issued rwts: total=0,115501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.237 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:13.237 00:16:13.237 Run status group 0 (all jobs): 00:16:13.237 WRITE: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=451MiB (473MB), run=5001-5001msec 00:16:13.497 ----------------------------------------------------- 00:16:13.497 Suppressions used: 00:16:13.497 count bytes template 00:16:13.497 1 11 /usr/src/fio/parse.c 00:16:13.497 1 8 libtcmalloc_minimal.so 00:16:13.497 1 904 libcrypto.so 00:16:13.497 ----------------------------------------------------- 00:16:13.497 00:16:13.497 00:16:13.497 real 0m14.688s 00:16:13.497 user 0m7.703s 00:16:13.497 sys 0m6.572s 00:16:13.497 ************************************ 00:16:13.497 END TEST xnvme_fio_plugin 00:16:13.497 ************************************ 
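A quick arithmetic check on the fio summary above, for readers comparing runs: 115,501 issued writes × 4096 B ≈ 473 MB ≈ 451 MiB, and over the 5.001 s runtime that is ≈ 90.2 MiB/s (94.6 MB/s) and ≈ 23.1k IOPS, matching the reported line. The read pass earlier in this test lands in the same place (about 23.1k IOPS at 90.3 MiB/s), so the two bandwidth figures are simply IOPS times the 4 KiB block size expressed in MiB and MB.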
00:16:13.497 10:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.497 10:28:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:13.497 10:28:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:13.497 10:28:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:13.497 10:28:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:13.497 10:28:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:13.497 10:28:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:13.497 10:28:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.497 10:28:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:13.757 ************************************ 00:16:13.757 START TEST xnvme_rpc 00:16:13.757 ************************************ 00:16:13.757 10:28:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:13.757 10:28:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:13.757 10:28:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:13.757 10:28:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:13.757 10:28:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:13.757 10:28:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71784 00:16:13.757 10:28:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71784 00:16:13.757 10:28:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:13.757 10:28:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71784 ']' 00:16:13.757 10:28:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.758 10:28:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.758 10:28:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.758 10:28:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.758 10:28:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.758 [2024-12-07 10:28:12.980909] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
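The xnvme_rpc pass starting here (io_uring again, this time with conserve_cpu turned back on, target pid 71784) verifies each creation parameter the same way the earlier passes did: dump the bdev subsystem config and pull out one field per query. A sketch of that check as a small helper, with the jq filter copied from the trace; the function name rpc_field is made up for this sketch (the suite's own helper is invoked as rpc_xnvme) and the rpc.py path is assumed from the checkout layout:

  rpc_field() {   # print one bdev_xnvme_create parameter, e.g. rpc_field conserve_cpu
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev |
      jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
  }
  [[ $(rpc_field name)         == xnvme_bdev   ]]
  [[ $(rpc_field filename)     == /dev/nvme0n1 ]]
  [[ $(rpc_field io_mechanism) == io_uring     ]]
  [[ $(rpc_field conserve_cpu) == true         ]]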
00:16:13.758 [2024-12-07 10:28:12.981243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71784 ] 00:16:14.017 [2024-12-07 10:28:13.162339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.017 [2024-12-07 10:28:13.266008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.955 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.955 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:14.955 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:16:14.955 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.955 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.955 xnvme_bdev 00:16:14.955 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.955 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:14.955 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:14.955 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:14.955 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.955 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.955 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.955 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71784 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71784 ']' 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71784 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.956 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71784 00:16:15.215 killing process with pid 71784 00:16:15.215 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.215 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.215 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71784' 00:16:15.215 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71784 00:16:15.215 10:28:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71784 00:16:17.750 ************************************ 00:16:17.750 END TEST xnvme_rpc 00:16:17.750 ************************************ 00:16:17.750 00:16:17.750 real 0m3.729s 00:16:17.750 user 0m3.747s 00:16:17.750 sys 0m0.548s 00:16:17.750 10:28:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.750 10:28:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.750 10:28:16 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:17.750 10:28:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:17.750 10:28:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.750 10:28:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:17.750 ************************************ 00:16:17.750 START TEST xnvme_bdevperf 00:16:17.750 ************************************ 00:16:17.750 10:28:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:17.750 10:28:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:17.750 10:28:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:16:17.750 10:28:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:17.750 10:28:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:17.750 10:28:16 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:17.750 10:28:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:17.750 10:28:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:17.750 { 00:16:17.750 "subsystems": [ 00:16:17.750 { 00:16:17.750 "subsystem": "bdev", 00:16:17.750 "config": [ 00:16:17.750 { 00:16:17.750 "params": { 00:16:17.750 "io_mechanism": "io_uring", 00:16:17.750 "conserve_cpu": true, 00:16:17.750 "filename": "/dev/nvme0n1", 00:16:17.750 "name": "xnvme_bdev" 00:16:17.750 }, 00:16:17.750 "method": "bdev_xnvme_create" 00:16:17.750 }, 00:16:17.750 { 00:16:17.750 "method": "bdev_wait_for_examine" 00:16:17.750 } 00:16:17.750 ] 00:16:17.750 } 00:16:17.750 ] 00:16:17.750 } 00:16:17.750 [2024-12-07 10:28:16.764141] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:16:17.750 [2024-12-07 10:28:16.764255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71869 ] 00:16:17.750 [2024-12-07 10:28:16.942087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.750 [2024-12-07 10:28:17.047009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.318 Running I/O for 5 seconds... 00:16:20.191 34880.00 IOPS, 136.25 MiB/s [2024-12-07T10:28:20.480Z] 34240.00 IOPS, 133.75 MiB/s [2024-12-07T10:28:21.412Z] 35712.00 IOPS, 139.50 MiB/s [2024-12-07T10:28:22.789Z] 34848.00 IOPS, 136.12 MiB/s 00:16:23.436 Latency(us) 00:16:23.436 [2024-12-07T10:28:22.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.436 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:23.436 xnvme_bdev : 5.00 35260.39 137.74 0.00 0.00 1810.41 779.72 8317.02 00:16:23.436 [2024-12-07T10:28:22.789Z] =================================================================================================================== 00:16:23.436 [2024-12-07T10:28:22.789Z] Total : 35260.39 137.74 0.00 0.00 1810.41 779.72 8317.02 00:16:24.376 10:28:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:24.376 10:28:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:24.376 10:28:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:24.376 10:28:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:24.376 10:28:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:24.376 { 00:16:24.376 "subsystems": [ 00:16:24.376 { 00:16:24.376 "subsystem": "bdev", 00:16:24.376 "config": [ 00:16:24.376 { 00:16:24.376 "params": { 00:16:24.376 "io_mechanism": "io_uring", 00:16:24.376 "conserve_cpu": true, 00:16:24.376 "filename": "/dev/nvme0n1", 00:16:24.376 "name": "xnvme_bdev" 00:16:24.376 }, 00:16:24.376 "method": "bdev_xnvme_create" 00:16:24.376 }, 00:16:24.376 { 00:16:24.376 "method": "bdev_wait_for_examine" 00:16:24.376 } 00:16:24.376 ] 00:16:24.376 } 00:16:24.376 ] 00:16:24.376 } 00:16:24.376 [2024-12-07 10:28:23.530914] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
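The bdevperf passes in this phase are driven entirely by the JSON blob printed before each run; gen_conf emits it and the harness hands it to bdevperf on /dev/fd/62. Outside the harness, roughly the same randread pass can be reproduced by saving that blob to a file first. A sketch using the paths and device from this log, run from the SPDK repo root (/tmp/xnvme_bdev.json is a name chosen here for illustration, not one the scripts use):

cat > /tmp/xnvme_bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
# 4 KiB random reads, queue depth 64, 5 seconds, -T names the bdev under test as in this log
./build/examples/bdevperf --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096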
00:16:24.376 [2024-12-07 10:28:23.531049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71945 ] 00:16:24.376 [2024-12-07 10:28:23.713225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.635 [2024-12-07 10:28:23.821532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.895 Running I/O for 5 seconds... 00:16:27.206 26176.00 IOPS, 102.25 MiB/s [2024-12-07T10:28:27.493Z] 25824.00 IOPS, 100.88 MiB/s [2024-12-07T10:28:28.428Z] 25024.00 IOPS, 97.75 MiB/s [2024-12-07T10:28:29.364Z] 24816.00 IOPS, 96.94 MiB/s [2024-12-07T10:28:29.364Z] 24576.00 IOPS, 96.00 MiB/s 00:16:30.011 Latency(us) 00:16:30.011 [2024-12-07T10:28:29.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.011 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:30.011 xnvme_bdev : 5.01 24532.25 95.83 0.00 0.00 2600.89 1197.55 7948.54 00:16:30.011 [2024-12-07T10:28:29.364Z] =================================================================================================================== 00:16:30.011 [2024-12-07T10:28:29.364Z] Total : 24532.25 95.83 0.00 0.00 2600.89 1197.55 7948.54 00:16:30.947 00:16:30.947 real 0m13.579s 00:16:30.947 user 0m7.657s 00:16:30.947 sys 0m5.406s 00:16:30.947 10:28:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.947 10:28:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:30.947 ************************************ 00:16:30.947 END TEST xnvme_bdevperf 00:16:30.947 ************************************ 00:16:31.206 10:28:30 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:31.206 10:28:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:31.206 10:28:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.206 10:28:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:31.206 ************************************ 00:16:31.206 START TEST xnvme_fio_plugin 00:16:31.206 ************************************ 00:16:31.206 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:31.206 10:28:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:31.206 10:28:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:16:31.206 10:28:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:31.206 10:28:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:31.206 10:28:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:31.206 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:31.206 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:31.206 10:28:30 
nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:31.207 { 00:16:31.207 "subsystems": [ 00:16:31.207 { 00:16:31.207 "subsystem": "bdev", 00:16:31.207 "config": [ 00:16:31.207 { 00:16:31.207 "params": { 00:16:31.207 "io_mechanism": "io_uring", 00:16:31.207 "conserve_cpu": true, 00:16:31.207 "filename": "/dev/nvme0n1", 00:16:31.207 "name": "xnvme_bdev" 00:16:31.207 }, 00:16:31.207 "method": "bdev_xnvme_create" 00:16:31.207 }, 00:16:31.207 { 00:16:31.207 "method": "bdev_wait_for_examine" 00:16:31.207 } 00:16:31.207 ] 00:16:31.207 } 00:16:31.207 ] 00:16:31.207 } 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:31.207 10:28:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:31.466 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:31.466 fio-3.35 00:16:31.466 Starting 1 thread 00:16:38.061 00:16:38.061 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72070: Sat Dec 7 10:28:36 2024 00:16:38.061 read: IOPS=22.6k, BW=88.3MiB/s (92.6MB/s)(442MiB/5001msec) 00:16:38.061 slat (usec): min=3, max=162, avg= 8.50, stdev= 3.74 00:16:38.061 clat (usec): min=452, max=3419, avg=2491.61, stdev=252.68 00:16:38.061 lat (usec): min=461, max=3457, avg=2500.11, stdev=253.70 00:16:38.061 clat percentiles (usec): 00:16:38.061 | 1.00th=[ 1795], 5.00th=[ 2008], 10.00th=[ 2180], 20.00th=[ 2311], 00:16:38.061 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2573], 00:16:38.061 | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 2802], 95.00th=[ 2868], 00:16:38.061 | 99.00th=[ 2933], 99.50th=[ 2966], 99.90th=[ 3032], 99.95th=[ 3097], 00:16:38.061 | 99.99th=[ 3294] 00:16:38.061 bw ( KiB/s): min=87552, 
max=94720, per=100.00%, avg=90622.67, stdev=2936.99, samples=9 00:16:38.061 iops : min=21888, max=23680, avg=22655.22, stdev=734.75, samples=9 00:16:38.061 lat (usec) : 500=0.01%, 750=0.01% 00:16:38.061 lat (msec) : 2=4.69%, 4=95.30% 00:16:38.061 cpu : usr=40.84%, sys=54.00%, ctx=9, majf=0, minf=762 00:16:38.061 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:38.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.061 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:38.061 issued rwts: total=113034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.061 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:38.061 00:16:38.061 Run status group 0 (all jobs): 00:16:38.061 READ: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=442MiB (463MB), run=5001-5001msec 00:16:38.320 ----------------------------------------------------- 00:16:38.320 Suppressions used: 00:16:38.320 count bytes template 00:16:38.320 1 11 /usr/src/fio/parse.c 00:16:38.320 1 8 libtcmalloc_minimal.so 00:16:38.320 1 904 libcrypto.so 00:16:38.320 ----------------------------------------------------- 00:16:38.320 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:38.579 10:28:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:38.579 { 00:16:38.579 "subsystems": [ 00:16:38.579 { 00:16:38.579 "subsystem": "bdev", 00:16:38.579 "config": [ 00:16:38.579 { 00:16:38.579 "params": { 00:16:38.579 "io_mechanism": "io_uring", 00:16:38.579 "conserve_cpu": true, 00:16:38.579 "filename": "/dev/nvme0n1", 00:16:38.579 "name": "xnvme_bdev" 00:16:38.579 }, 00:16:38.579 "method": "bdev_xnvme_create" 00:16:38.579 }, 00:16:38.579 { 00:16:38.579 "method": "bdev_wait_for_examine" 00:16:38.579 } 00:16:38.579 ] 00:16:38.579 } 00:16:38.579 ] 00:16:38.579 } 00:16:38.838 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:38.838 fio-3.35 00:16:38.838 Starting 1 thread 00:16:45.411 00:16:45.411 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72162: Sat Dec 7 10:28:43 2024 00:16:45.411 write: IOPS=23.0k, BW=90.0MiB/s (94.4MB/s)(450MiB/5003msec); 0 zone resets 00:16:45.411 slat (usec): min=2, max=179, avg= 8.68, stdev= 4.01 00:16:45.411 clat (usec): min=1108, max=6532, avg=2433.56, stdev=332.35 00:16:45.411 lat (usec): min=1111, max=6541, avg=2442.24, stdev=333.71 00:16:45.411 clat percentiles (usec): 00:16:45.411 | 1.00th=[ 1319], 5.00th=[ 1811], 10.00th=[ 2040], 20.00th=[ 2245], 00:16:45.411 | 30.00th=[ 2311], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:16:45.411 | 70.00th=[ 2638], 80.00th=[ 2704], 90.00th=[ 2802], 95.00th=[ 2868], 00:16:45.411 | 99.00th=[ 2966], 99.50th=[ 2966], 99.90th=[ 3064], 99.95th=[ 3130], 00:16:45.411 | 99.99th=[ 4228] 00:16:45.411 bw ( KiB/s): min=87040, max=113664, per=100.00%, avg=92299.56, stdev=8230.67, samples=9 00:16:45.411 iops : min=21760, max=28416, avg=23074.89, stdev=2057.67, samples=9 00:16:45.411 lat (msec) : 2=8.67%, 4=91.31%, 10=0.02% 00:16:45.411 cpu : usr=43.48%, sys=51.60%, ctx=11, majf=0, minf=763 00:16:45.411 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:45.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.411 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:45.411 issued rwts: total=0,115293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.411 00:16:45.411 Run status group 0 (all jobs): 00:16:45.411 WRITE: bw=90.0MiB/s (94.4MB/s), 90.0MiB/s-90.0MiB/s (94.4MB/s-94.4MB/s), io=450MiB (472MB), run=5003-5003msec 00:16:45.672 ----------------------------------------------------- 00:16:45.672 Suppressions used: 00:16:45.672 count bytes template 00:16:45.672 1 11 /usr/src/fio/parse.c 00:16:45.672 1 8 libtcmalloc_minimal.so 00:16:45.672 1 904 libcrypto.so 00:16:45.672 ----------------------------------------------------- 00:16:45.672 00:16:45.941 ************************************ 00:16:45.941 END TEST xnvme_fio_plugin 00:16:45.941 ************************************ 00:16:45.941 00:16:45.941 real 0m14.703s 00:16:45.941 user 0m8.042s 
00:16:45.941 sys 0m5.912s 00:16:45.941 10:28:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.941 10:28:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:45.941 10:28:45 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:45.941 10:28:45 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:16:45.941 10:28:45 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:16:45.941 10:28:45 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:16:45.941 10:28:45 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:45.941 10:28:45 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:45.941 10:28:45 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:45.941 10:28:45 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:45.941 10:28:45 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:45.941 10:28:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:45.941 10:28:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.941 10:28:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:45.941 ************************************ 00:16:45.941 START TEST xnvme_rpc 00:16:45.941 ************************************ 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72248 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72248 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72248 ']' 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.941 10:28:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.941 [2024-12-07 10:28:45.229774] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
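The xnvme_rpc test starting here is a short RPC round trip: bring up spdk_tgt, create the xnvme bdev, read each creation parameter back out of the saved bdev config, delete the bdev, and kill the target. With the xtrace wrappers stripped it amounts to roughly the following (a sketch; rpc_cmd in the harness forwards its arguments to scripts/rpc.py, and waitforlisten is what it actually uses instead of the sleep):

./build/bin/spdk_tgt &
tgt_pid=$!
sleep 2   # harness: waitforlisten on /var/tmp/spdk.sock
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'   # expect /dev/ng0n1
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
kill "$tgt_pid"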
00:16:45.942 [2024-12-07 10:28:45.230653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72248 ] 00:16:46.202 [2024-12-07 10:28:45.408593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.202 [2024-12-07 10:28:45.515625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.148 xnvme_bdev 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.148 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:47.434 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.434 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:47.434 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:47.434 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.434 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.434 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.434 10:28:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72248 00:16:47.434 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72248 ']' 00:16:47.434 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72248 00:16:47.434 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:47.434 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.434 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72248 00:16:47.434 killing process with pid 72248 00:16:47.434 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.435 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.435 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72248' 00:16:47.435 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72248 00:16:47.435 10:28:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72248 00:16:49.983 ************************************ 00:16:49.983 END TEST xnvme_rpc 00:16:49.983 ************************************ 00:16:49.983 00:16:49.983 real 0m3.734s 00:16:49.983 user 0m3.777s 00:16:49.983 sys 0m0.542s 00:16:49.983 10:28:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.983 10:28:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.983 10:28:48 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:49.983 10:28:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:49.983 10:28:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.983 10:28:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:49.983 ************************************ 00:16:49.983 START TEST xnvme_bdevperf 00:16:49.983 ************************************ 00:16:49.983 10:28:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:49.983 10:28:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:49.983 10:28:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:49.983 10:28:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:49.983 10:28:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:49.983 10:28:48 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:49.983 10:28:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:49.983 10:28:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:49.983 { 00:16:49.983 "subsystems": [ 00:16:49.983 { 00:16:49.983 "subsystem": "bdev", 00:16:49.983 "config": [ 00:16:49.983 { 00:16:49.983 "params": { 00:16:49.983 "io_mechanism": "io_uring_cmd", 00:16:49.983 "conserve_cpu": false, 00:16:49.983 "filename": "/dev/ng0n1", 00:16:49.983 "name": "xnvme_bdev" 00:16:49.983 }, 00:16:49.983 "method": "bdev_xnvme_create" 00:16:49.983 }, 00:16:49.983 { 00:16:49.983 "method": "bdev_wait_for_examine" 00:16:49.983 } 00:16:49.983 ] 00:16:49.983 } 00:16:49.983 ] 00:16:49.983 } 00:16:49.983 [2024-12-07 10:28:49.035322] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:16:49.983 [2024-12-07 10:28:49.035453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72329 ] 00:16:49.983 [2024-12-07 10:28:49.222902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.984 [2024-12-07 10:28:49.326084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.550 Running I/O for 5 seconds... 00:16:52.436 36288.00 IOPS, 141.75 MiB/s [2024-12-07T10:28:52.740Z] 30976.00 IOPS, 121.00 MiB/s [2024-12-07T10:28:53.675Z] 28864.00 IOPS, 112.75 MiB/s [2024-12-07T10:28:55.057Z] 28656.00 IOPS, 111.94 MiB/s [2024-12-07T10:28:55.057Z] 28108.80 IOPS, 109.80 MiB/s 00:16:55.704 Latency(us) 00:16:55.704 [2024-12-07T10:28:55.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.704 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:55.704 xnvme_bdev : 5.01 28058.61 109.60 0.00 0.00 2274.07 947.51 7948.54 00:16:55.704 [2024-12-07T10:28:55.057Z] =================================================================================================================== 00:16:55.704 [2024-12-07T10:28:55.057Z] Total : 28058.61 109.60 0.00 0.00 2274.07 947.51 7948.54 00:16:56.645 10:28:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:56.645 10:28:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:56.645 10:28:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:56.645 10:28:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:56.645 10:28:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:56.645 { 00:16:56.645 "subsystems": [ 00:16:56.645 { 00:16:56.645 "subsystem": "bdev", 00:16:56.645 "config": [ 00:16:56.645 { 00:16:56.645 "params": { 00:16:56.645 "io_mechanism": "io_uring_cmd", 00:16:56.645 "conserve_cpu": false, 00:16:56.645 "filename": "/dev/ng0n1", 00:16:56.645 "name": "xnvme_bdev" 00:16:56.645 }, 00:16:56.645 "method": "bdev_xnvme_create" 00:16:56.645 }, 00:16:56.645 { 00:16:56.645 "method": "bdev_wait_for_examine" 00:16:56.645 } 00:16:56.645 ] 00:16:56.645 } 00:16:56.645 ] 00:16:56.645 } 00:16:56.645 [2024-12-07 10:28:55.953467] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
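Compared with the io_uring phase above, only the params block changes here: io_uring drives the block node /dev/nvme0n1, while io_uring_cmd goes through the NVMe generic character node /dev/ng0n1, and conserve_cpu is flipped by the outer loop. Given a saved copy of the JSON blob (the hypothetical /tmp/xnvme_bdev.json from the earlier sketch), the combination a run used can be read back with a one-liner like:

jq -r '.subsystems[].config[]
       | select(.method == "bdev_xnvme_create").params
       | "\(.io_mechanism) \(.filename) conserve_cpu=\(.conserve_cpu)"' /tmp/xnvme_bdev.json
# for this phase: io_uring_cmd /dev/ng0n1 conserve_cpu=false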
00:16:56.645 [2024-12-07 10:28:55.954286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72409 ] 00:16:56.904 [2024-12-07 10:28:56.130906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.163 [2024-12-07 10:28:56.262454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.422 Running I/O for 5 seconds... 00:16:59.743 29184.00 IOPS, 114.00 MiB/s [2024-12-07T10:29:00.037Z] 26880.00 IOPS, 105.00 MiB/s [2024-12-07T10:29:00.976Z] 25578.67 IOPS, 99.92 MiB/s [2024-12-07T10:29:01.915Z] 25888.00 IOPS, 101.12 MiB/s 00:17:02.562 Latency(us) 00:17:02.562 [2024-12-07T10:29:01.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.562 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:02.562 xnvme_bdev : 5.00 26088.38 101.91 0.00 0.00 2445.33 980.41 8001.18 00:17:02.562 [2024-12-07T10:29:01.915Z] =================================================================================================================== 00:17:02.562 [2024-12-07T10:29:01.915Z] Total : 26088.38 101.91 0.00 0.00 2445.33 980.41 8001.18 00:17:03.945 10:29:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:03.945 10:29:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:17:03.945 10:29:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:03.945 10:29:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:03.945 10:29:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:03.945 { 00:17:03.945 "subsystems": [ 00:17:03.945 { 00:17:03.945 "subsystem": "bdev", 00:17:03.945 "config": [ 00:17:03.945 { 00:17:03.945 "params": { 00:17:03.945 "io_mechanism": "io_uring_cmd", 00:17:03.945 "conserve_cpu": false, 00:17:03.945 "filename": "/dev/ng0n1", 00:17:03.945 "name": "xnvme_bdev" 00:17:03.945 }, 00:17:03.945 "method": "bdev_xnvme_create" 00:17:03.945 }, 00:17:03.945 { 00:17:03.945 "method": "bdev_wait_for_examine" 00:17:03.945 } 00:17:03.945 ] 00:17:03.945 } 00:17:03.945 ] 00:17:03.945 } 00:17:03.945 [2024-12-07 10:29:02.974285] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:03.945 [2024-12-07 10:29:02.974620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72494 ] 00:17:03.945 [2024-12-07 10:29:03.161593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.945 [2024-12-07 10:29:03.294876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.514 Running I/O for 5 seconds... 
00:17:06.391 62144.00 IOPS, 242.75 MiB/s [2024-12-07T10:29:07.122Z] 61344.00 IOPS, 239.62 MiB/s [2024-12-07T10:29:07.688Z] 60309.33 IOPS, 235.58 MiB/s [2024-12-07T10:29:09.064Z] 60976.00 IOPS, 238.19 MiB/s 00:17:09.711 Latency(us) 00:17:09.711 [2024-12-07T10:29:09.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.711 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:17:09.711 xnvme_bdev : 5.00 60680.89 237.03 0.00 0.00 1051.20 687.60 4211.15 00:17:09.711 [2024-12-07T10:29:09.064Z] =================================================================================================================== 00:17:09.711 [2024-12-07T10:29:09.064Z] Total : 60680.89 237.03 0.00 0.00 1051.20 687.60 4211.15 00:17:10.648 10:29:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:10.648 10:29:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:17:10.648 10:29:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:10.648 10:29:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:10.648 10:29:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:10.648 { 00:17:10.648 "subsystems": [ 00:17:10.648 { 00:17:10.648 "subsystem": "bdev", 00:17:10.648 "config": [ 00:17:10.648 { 00:17:10.648 "params": { 00:17:10.648 "io_mechanism": "io_uring_cmd", 00:17:10.648 "conserve_cpu": false, 00:17:10.648 "filename": "/dev/ng0n1", 00:17:10.648 "name": "xnvme_bdev" 00:17:10.648 }, 00:17:10.648 "method": "bdev_xnvme_create" 00:17:10.648 }, 00:17:10.648 { 00:17:10.648 "method": "bdev_wait_for_examine" 00:17:10.648 } 00:17:10.648 ] 00:17:10.648 } 00:17:10.648 ] 00:17:10.648 } 00:17:10.648 [2024-12-07 10:29:09.973359] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:10.648 [2024-12-07 10:29:09.973662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72570 ] 00:17:10.907 [2024-12-07 10:29:10.157603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.166 [2024-12-07 10:29:10.287322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.425 Running I/O for 5 seconds... 
00:17:13.733 9039.00 IOPS, 35.31 MiB/s [2024-12-07T10:29:14.019Z] 36091.50 IOPS, 140.98 MiB/s [2024-12-07T10:29:14.953Z] 47214.00 IOPS, 184.43 MiB/s [2024-12-07T10:29:15.970Z] 52840.25 IOPS, 206.41 MiB/s [2024-12-07T10:29:15.970Z] 55904.80 IOPS, 218.38 MiB/s 00:17:16.617 Latency(us) 00:17:16.617 [2024-12-07T10:29:15.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.617 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:17:16.617 xnvme_bdev : 5.00 55888.17 218.31 0.00 0.00 1142.66 96.23 31794.17 00:17:16.617 [2024-12-07T10:29:15.970Z] =================================================================================================================== 00:17:16.617 [2024-12-07T10:29:15.970Z] Total : 55888.17 218.31 0.00 0.00 1142.66 96.23 31794.17 00:17:17.554 00:17:17.554 real 0m27.923s 00:17:17.554 user 0m14.475s 00:17:17.554 sys 0m13.002s 00:17:17.554 ************************************ 00:17:17.554 END TEST xnvme_bdevperf 00:17:17.554 ************************************ 00:17:17.554 10:29:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.554 10:29:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:17.814 10:29:16 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:17.814 10:29:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:17.814 10:29:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.814 10:29:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:17.814 ************************************ 00:17:17.814 START TEST xnvme_fio_plugin 00:17:17.814 ************************************ 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
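What this wrapper is assembling is an ordinary fio run against the spdk_bdev external ioengine: the ldd/grep/awk trace below it locates libasan, preloads it together with the engine, and hands fio the same JSON bdev config over /dev/fd/62. A standalone equivalent with the paths visible in this trace (sketch only; /tmp/xnvme_bdev.json stands in for the fd the harness passes):

# libasan has to be preloaded ahead of the engine when the SPDK tree was built with ASAN,
# which is what the ldd lookup in the trace is for.
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev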
00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:17.815 10:29:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:17.815 { 00:17:17.815 "subsystems": [ 00:17:17.815 { 00:17:17.815 "subsystem": "bdev", 00:17:17.815 "config": [ 00:17:17.815 { 00:17:17.815 "params": { 00:17:17.815 "io_mechanism": "io_uring_cmd", 00:17:17.815 "conserve_cpu": false, 00:17:17.815 "filename": "/dev/ng0n1", 00:17:17.815 "name": "xnvme_bdev" 00:17:17.815 }, 00:17:17.815 "method": "bdev_xnvme_create" 00:17:17.815 }, 00:17:17.815 { 00:17:17.815 "method": "bdev_wait_for_examine" 00:17:17.815 } 00:17:17.815 ] 00:17:17.815 } 00:17:17.815 ] 00:17:17.815 } 00:17:18.075 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:18.075 fio-3.35 00:17:18.075 Starting 1 thread 00:17:24.648 00:17:24.648 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72698: Sat Dec 7 10:29:23 2024 00:17:24.648 read: IOPS=23.7k, BW=92.5MiB/s (97.0MB/s)(463MiB/5001msec) 00:17:24.648 slat (usec): min=2, max=174, avg= 8.23, stdev= 3.82 00:17:24.648 clat (usec): min=1034, max=3360, avg=2369.12, stdev=309.21 00:17:24.648 lat (usec): min=1036, max=3387, avg=2377.36, stdev=310.56 00:17:24.648 clat percentiles (usec): 00:17:24.648 | 1.00th=[ 1287], 5.00th=[ 1778], 10.00th=[ 1991], 20.00th=[ 2180], 00:17:24.648 | 30.00th=[ 2278], 40.00th=[ 2343], 50.00th=[ 2409], 60.00th=[ 2474], 00:17:24.648 | 70.00th=[ 2540], 80.00th=[ 2638], 90.00th=[ 2704], 95.00th=[ 2769], 00:17:24.648 | 99.00th=[ 2835], 99.50th=[ 2868], 99.90th=[ 3032], 99.95th=[ 3130], 00:17:24.648 | 99.99th=[ 3294] 00:17:24.648 bw ( KiB/s): min=90112, max=106496, per=100.00%, avg=94947.56, stdev=4803.74, samples=9 00:17:24.648 iops : min=22528, max=26624, avg=23736.89, stdev=1200.94, samples=9 00:17:24.648 lat (msec) : 2=10.32%, 4=89.68% 00:17:24.648 cpu : usr=39.74%, sys=58.54%, ctx=21, majf=0, minf=762 00:17:24.648 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:24.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.648 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:24.648 issued rwts: 
total=118464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.648 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:24.648 00:17:24.648 Run status group 0 (all jobs): 00:17:24.648 READ: bw=92.5MiB/s (97.0MB/s), 92.5MiB/s-92.5MiB/s (97.0MB/s-97.0MB/s), io=463MiB (485MB), run=5001-5001msec 00:17:25.217 ----------------------------------------------------- 00:17:25.217 Suppressions used: 00:17:25.217 count bytes template 00:17:25.217 1 11 /usr/src/fio/parse.c 00:17:25.217 1 8 libtcmalloc_minimal.so 00:17:25.217 1 904 libcrypto.so 00:17:25.217 ----------------------------------------------------- 00:17:25.217 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:25.217 10:29:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:17:25.218 { 00:17:25.218 "subsystems": [ 00:17:25.218 { 00:17:25.218 "subsystem": "bdev", 00:17:25.218 "config": [ 00:17:25.218 { 00:17:25.218 "params": { 00:17:25.218 "io_mechanism": "io_uring_cmd", 00:17:25.218 "conserve_cpu": false, 00:17:25.218 "filename": "/dev/ng0n1", 00:17:25.218 "name": "xnvme_bdev" 00:17:25.218 }, 00:17:25.218 "method": "bdev_xnvme_create" 00:17:25.218 }, 00:17:25.218 { 00:17:25.218 "method": "bdev_wait_for_examine" 00:17:25.218 } 00:17:25.218 ] 00:17:25.218 } 00:17:25.218 ] 00:17:25.218 } 00:17:25.477 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:25.477 fio-3.35 00:17:25.477 Starting 1 thread 00:17:32.044 00:17:32.044 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72790: Sat Dec 7 10:29:30 2024 00:17:32.044 write: IOPS=28.3k, BW=111MiB/s (116MB/s)(553MiB/5002msec); 0 zone resets 00:17:32.044 slat (usec): min=2, max=283, avg= 6.95, stdev= 4.31 00:17:32.044 clat (usec): min=354, max=4065, avg=1992.54, stdev=508.67 00:17:32.044 lat (usec): min=359, max=4072, avg=1999.49, stdev=510.72 00:17:32.044 clat percentiles (usec): 00:17:32.044 | 1.00th=[ 824], 5.00th=[ 1156], 10.00th=[ 1270], 20.00th=[ 1467], 00:17:32.044 | 30.00th=[ 1696], 40.00th=[ 1893], 50.00th=[ 2073], 60.00th=[ 2212], 00:17:32.044 | 70.00th=[ 2311], 80.00th=[ 2474], 90.00th=[ 2638], 95.00th=[ 2737], 00:17:32.044 | 99.00th=[ 2835], 99.50th=[ 2933], 99.90th=[ 3425], 99.95th=[ 3556], 00:17:32.044 | 99.99th=[ 3916] 00:17:32.044 bw ( KiB/s): min=97792, max=125952, per=100.00%, avg=113355.11, stdev=10024.97, samples=9 00:17:32.044 iops : min=24448, max=31488, avg=28338.78, stdev=2506.24, samples=9 00:17:32.044 lat (usec) : 500=0.01%, 750=0.64%, 1000=1.31% 00:17:32.044 lat (msec) : 2=43.92%, 4=54.12%, 10=0.01% 00:17:32.044 cpu : usr=38.09%, sys=59.89%, ctx=14, majf=0, minf=763 00:17:32.044 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.1%, 16=24.2%, 32=51.6%, >=64=1.6% 00:17:32.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.044 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:32.044 issued rwts: total=0,141508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.044 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:32.044 00:17:32.044 Run status group 0 (all jobs): 00:17:32.044 WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=553MiB (580MB), run=5002-5002msec 00:17:32.611 ----------------------------------------------------- 00:17:32.612 Suppressions used: 00:17:32.612 count bytes template 00:17:32.612 1 11 /usr/src/fio/parse.c 00:17:32.612 1 8 libtcmalloc_minimal.so 00:17:32.612 1 904 libcrypto.so 00:17:32.612 ----------------------------------------------------- 00:17:32.612 00:17:32.870 00:17:32.870 real 0m15.069s 00:17:32.870 user 0m7.936s 00:17:32.870 sys 0m6.691s 00:17:32.870 10:29:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.870 ************************************ 00:17:32.870 END TEST xnvme_fio_plugin 00:17:32.870 ************************************ 00:17:32.870 10:29:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:32.870 10:29:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:32.870 10:29:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:32.870 10:29:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:32.870 10:29:32 nvme_xnvme 
-- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:32.870 10:29:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:32.870 10:29:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.870 10:29:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:32.870 ************************************ 00:17:32.870 START TEST xnvme_rpc 00:17:32.870 ************************************ 00:17:32.870 10:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:32.870 10:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:32.870 10:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:32.870 10:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:32.870 10:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:32.870 10:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72881 00:17:32.870 10:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:32.870 10:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72881 00:17:32.870 10:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72881 ']' 00:17:32.870 10:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.871 10:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.871 10:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.871 10:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.871 10:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.871 [2024-12-07 10:29:32.202930] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:17:32.871 [2024-12-07 10:29:32.203120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72881 ] 00:17:33.129 [2024-12-07 10:29:32.386817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.387 [2024-12-07 10:29:32.523581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.325 xnvme_bdev 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:34.325 
10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.325 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72881 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72881 ']' 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72881 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72881 00:17:34.585 killing process with pid 72881 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72881' 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72881 00:17:34.585 10:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72881 00:17:37.135 ************************************ 00:17:37.135 END TEST xnvme_rpc 00:17:37.135 ************************************ 00:17:37.135 00:17:37.135 real 0m4.183s 00:17:37.135 user 0m4.036s 00:17:37.135 sys 0m0.753s 00:17:37.135 10:29:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.135 10:29:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.135 10:29:36 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:37.136 10:29:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:37.136 10:29:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.136 10:29:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:37.136 ************************************ 00:17:37.136 START TEST xnvme_bdevperf 00:17:37.136 ************************************ 00:17:37.136 10:29:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:37.136 10:29:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:37.136 10:29:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:17:37.136 10:29:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:37.136 10:29:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:37.136 10:29:36 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:37.136 10:29:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:37.136 10:29:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:37.136 { 00:17:37.136 "subsystems": [ 00:17:37.136 { 00:17:37.136 "subsystem": "bdev", 00:17:37.136 "config": [ 00:17:37.136 { 00:17:37.136 "params": { 00:17:37.136 "io_mechanism": "io_uring_cmd", 00:17:37.136 "conserve_cpu": true, 00:17:37.136 "filename": "/dev/ng0n1", 00:17:37.136 "name": "xnvme_bdev" 00:17:37.136 }, 00:17:37.136 "method": "bdev_xnvme_create" 00:17:37.136 }, 00:17:37.136 { 00:17:37.136 "method": "bdev_wait_for_examine" 00:17:37.136 } 00:17:37.136 ] 00:17:37.136 } 00:17:37.136 ] 00:17:37.136 } 00:17:37.136 [2024-12-07 10:29:36.447095] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:37.136 [2024-12-07 10:29:36.447410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72970 ] 00:17:37.396 [2024-12-07 10:29:36.633883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.656 [2024-12-07 10:29:36.769191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.916 Running I/O for 5 seconds... 00:17:40.243 25088.00 IOPS, 98.00 MiB/s [2024-12-07T10:29:40.536Z] 23872.00 IOPS, 93.25 MiB/s [2024-12-07T10:29:41.477Z] 23936.00 IOPS, 93.50 MiB/s [2024-12-07T10:29:42.418Z] 24160.00 IOPS, 94.38 MiB/s 00:17:43.065 Latency(us) 00:17:43.065 [2024-12-07T10:29:42.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.065 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:43.065 xnvme_bdev : 5.01 23980.31 93.67 0.00 0.00 2660.29 940.93 8159.10 00:17:43.065 [2024-12-07T10:29:42.418Z] =================================================================================================================== 00:17:43.065 [2024-12-07T10:29:42.418Z] Total : 23980.31 93.67 0.00 0.00 2660.29 940.93 8159.10 00:17:44.446 10:29:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:44.446 10:29:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:44.446 10:29:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:44.446 10:29:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:44.446 10:29:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:44.446 { 00:17:44.446 "subsystems": [ 00:17:44.446 { 00:17:44.446 "subsystem": "bdev", 00:17:44.446 "config": [ 00:17:44.446 { 00:17:44.446 "params": { 00:17:44.446 "io_mechanism": "io_uring_cmd", 00:17:44.446 "conserve_cpu": true, 00:17:44.446 "filename": "/dev/ng0n1", 00:17:44.446 "name": "xnvme_bdev" 00:17:44.446 }, 00:17:44.446 "method": "bdev_xnvme_create" 00:17:44.446 }, 00:17:44.446 { 00:17:44.446 "method": "bdev_wait_for_examine" 00:17:44.446 } 00:17:44.446 ] 00:17:44.446 } 00:17:44.446 ] 00:17:44.446 } 00:17:44.446 [2024-12-07 10:29:43.486785] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
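[editor's note] For reference, the bdevperf runs traced in this block all follow the same pattern: a small JSON bdev config that registers /dev/ng0n1 as an io_uring_cmd xnvme bdev with conserve_cpu enabled, handed to the bdevperf example binary together with the workload flags. A minimal stand-alone sketch of that pattern in bash follows; the CI script actually streams the config over /dev/fd/62 via gen_conf, so writing it to a file here is just the simplest equivalent, and /tmp/xnvme_bdev.json is an illustrative path, not one used in this run.

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": true,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# Same shape as the runs above: 4 KiB I/O, queue depth 64, 5 seconds,
# targeting only the bdev named xnvme_bdev; swapping -w between randread,
# randwrite, unmap and write_zeroes reproduces the individual runs.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

[end of editor's note]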
00:17:44.446 [2024-12-07 10:29:43.486924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73052 ] 00:17:44.446 [2024-12-07 10:29:43.669770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.705 [2024-12-07 10:29:43.810978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.964 Running I/O for 5 seconds... 00:17:47.282 28864.00 IOPS, 112.75 MiB/s [2024-12-07T10:29:47.571Z] 26592.00 IOPS, 103.88 MiB/s [2024-12-07T10:29:48.506Z] 26133.33 IOPS, 102.08 MiB/s [2024-12-07T10:29:49.441Z] 26112.00 IOPS, 102.00 MiB/s 00:17:50.088 Latency(us) 00:17:50.088 [2024-12-07T10:29:49.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.088 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:50.088 xnvme_bdev : 5.01 25720.35 100.47 0.00 0.00 2480.27 789.59 7527.43 00:17:50.088 [2024-12-07T10:29:49.441Z] =================================================================================================================== 00:17:50.088 [2024-12-07T10:29:49.441Z] Total : 25720.35 100.47 0.00 0.00 2480.27 789.59 7527.43 00:17:51.476 10:29:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:51.476 10:29:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:17:51.476 10:29:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:51.476 10:29:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:51.476 10:29:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:51.476 { 00:17:51.476 "subsystems": [ 00:17:51.476 { 00:17:51.476 "subsystem": "bdev", 00:17:51.476 "config": [ 00:17:51.476 { 00:17:51.476 "params": { 00:17:51.476 "io_mechanism": "io_uring_cmd", 00:17:51.476 "conserve_cpu": true, 00:17:51.476 "filename": "/dev/ng0n1", 00:17:51.476 "name": "xnvme_bdev" 00:17:51.476 }, 00:17:51.476 "method": "bdev_xnvme_create" 00:17:51.476 }, 00:17:51.476 { 00:17:51.476 "method": "bdev_wait_for_examine" 00:17:51.476 } 00:17:51.476 ] 00:17:51.476 } 00:17:51.476 ] 00:17:51.476 } 00:17:51.476 [2024-12-07 10:29:50.544456] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:51.476 [2024-12-07 10:29:50.544586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73133 ] 00:17:51.476 [2024-12-07 10:29:50.728834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.744 [2024-12-07 10:29:50.863070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.030 Running I/O for 5 seconds... 
00:17:53.921 70464.00 IOPS, 275.25 MiB/s [2024-12-07T10:29:54.653Z] 70048.00 IOPS, 273.62 MiB/s [2024-12-07T10:29:55.596Z] 70890.67 IOPS, 276.92 MiB/s [2024-12-07T10:29:56.533Z] 71296.00 IOPS, 278.50 MiB/s [2024-12-07T10:29:56.533Z] 71513.60 IOPS, 279.35 MiB/s 00:17:57.180 Latency(us) 00:17:57.180 [2024-12-07T10:29:56.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.180 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:17:57.180 xnvme_bdev : 5.00 71505.11 279.32 0.00 0.00 892.34 651.41 2513.53 00:17:57.180 [2024-12-07T10:29:56.533Z] =================================================================================================================== 00:17:57.180 [2024-12-07T10:29:56.533Z] Total : 71505.11 279.32 0.00 0.00 892.34 651.41 2513.53 00:17:58.119 10:29:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:58.119 10:29:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:17:58.119 10:29:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:58.119 10:29:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:58.119 10:29:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:58.379 { 00:17:58.379 "subsystems": [ 00:17:58.379 { 00:17:58.379 "subsystem": "bdev", 00:17:58.379 "config": [ 00:17:58.379 { 00:17:58.379 "params": { 00:17:58.379 "io_mechanism": "io_uring_cmd", 00:17:58.379 "conserve_cpu": true, 00:17:58.379 "filename": "/dev/ng0n1", 00:17:58.379 "name": "xnvme_bdev" 00:17:58.379 }, 00:17:58.379 "method": "bdev_xnvme_create" 00:17:58.379 }, 00:17:58.379 { 00:17:58.379 "method": "bdev_wait_for_examine" 00:17:58.379 } 00:17:58.379 ] 00:17:58.379 } 00:17:58.379 ] 00:17:58.379 } 00:17:58.379 [2024-12-07 10:29:57.523089] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:58.379 [2024-12-07 10:29:57.523437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73208 ] 00:17:58.379 [2024-12-07 10:29:57.712210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.641 [2024-12-07 10:29:57.845707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.901 Running I/O for 5 seconds... 
00:18:01.207 59274.00 IOPS, 231.54 MiB/s [2024-12-07T10:30:01.491Z] 58931.00 IOPS, 230.20 MiB/s [2024-12-07T10:30:02.425Z] 59659.00 IOPS, 233.04 MiB/s [2024-12-07T10:30:03.362Z] 56162.25 IOPS, 219.38 MiB/s 00:18:04.009 Latency(us) 00:18:04.009 [2024-12-07T10:30:03.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.009 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:04.009 xnvme_bdev : 5.00 52278.59 204.21 0.00 0.00 1219.12 51.41 23056.04 00:18:04.009 [2024-12-07T10:30:03.362Z] =================================================================================================================== 00:18:04.009 [2024-12-07T10:30:03.362Z] Total : 52278.59 204.21 0.00 0.00 1219.12 51.41 23056.04 00:18:05.389 00:18:05.389 real 0m28.106s 00:18:05.389 user 0m17.145s 00:18:05.389 sys 0m8.754s 00:18:05.389 10:30:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.389 10:30:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:05.389 ************************************ 00:18:05.389 END TEST xnvme_bdevperf 00:18:05.389 ************************************ 00:18:05.389 10:30:04 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:05.389 10:30:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:05.389 10:30:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.389 10:30:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:05.389 ************************************ 00:18:05.389 START TEST xnvme_fio_plugin 00:18:05.389 ************************************ 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:05.389 { 00:18:05.389 "subsystems": [ 00:18:05.389 { 00:18:05.389 "subsystem": "bdev", 00:18:05.389 "config": [ 00:18:05.389 { 00:18:05.389 "params": { 00:18:05.389 "io_mechanism": "io_uring_cmd", 00:18:05.389 "conserve_cpu": true, 00:18:05.389 "filename": "/dev/ng0n1", 00:18:05.389 "name": "xnvme_bdev" 00:18:05.389 }, 00:18:05.389 "method": "bdev_xnvme_create" 00:18:05.389 }, 00:18:05.389 { 00:18:05.389 "method": "bdev_wait_for_examine" 00:18:05.389 } 00:18:05.389 ] 00:18:05.389 } 00:18:05.389 ] 00:18:05.389 } 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:05.389 10:30:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:05.649 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:05.649 fio-3.35 00:18:05.649 Starting 1 thread 00:18:12.206 00:18:12.206 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73336: Sat Dec 7 10:30:10 2024 00:18:12.206 read: IOPS=38.1k, BW=149MiB/s (156MB/s)(744MiB/5001msec) 00:18:12.206 slat (usec): min=2, max=126, avg= 4.90, stdev= 2.23 00:18:12.206 clat (usec): min=112, max=38979, avg=1488.51, stdev=564.02 00:18:12.206 lat (usec): min=115, max=38985, avg=1493.40, stdev=565.36 00:18:12.206 clat percentiles (usec): 00:18:12.206 | 1.00th=[ 750], 5.00th=[ 807], 10.00th=[ 848], 20.00th=[ 922], 00:18:12.206 | 30.00th=[ 1057], 40.00th=[ 1434], 50.00th=[ 1549], 60.00th=[ 1647], 00:18:12.206 | 70.00th=[ 1762], 80.00th=[ 1909], 90.00th=[ 2073], 95.00th=[ 2212], 00:18:12.206 | 99.00th=[ 2442], 99.50th=[ 2540], 99.90th=[ 5014], 99.95th=[ 8848], 00:18:12.206 | 99.99th=[16909] 00:18:12.206 bw ( KiB/s): min=113128, max=254440, per=100.00%, avg=152552.89, stdev=49025.61, samples=9 00:18:12.206 iops : min=28282, max=63610, avg=38138.22, stdev=12256.40, samples=9 00:18:12.206 lat (usec) : 250=0.04%, 500=0.18%, 750=0.83%, 1000=26.80% 00:18:12.206 lat (msec) : 2=58.04%, 4=14.00%, 10=0.08%, 20=0.04%, 50=0.01% 00:18:12.206 cpu : usr=45.56%, sys=51.74%, ctx=15, majf=0, minf=762 00:18:12.206 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.1%, >=64=1.7% 00:18:12.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.206 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:18:12.206 issued rwts: total=190566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.206 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:12.206 00:18:12.206 Run status group 0 (all jobs): 00:18:12.206 READ: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=744MiB (781MB), run=5001-5001msec 00:18:12.773 ----------------------------------------------------- 00:18:12.773 Suppressions used: 00:18:12.773 count bytes template 00:18:12.773 1 11 /usr/src/fio/parse.c 00:18:12.773 1 8 libtcmalloc_minimal.so 00:18:12.773 1 904 libcrypto.so 00:18:12.773 ----------------------------------------------------- 00:18:12.773 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:12.773 10:30:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:13.031 { 00:18:13.031 "subsystems": [ 00:18:13.031 { 00:18:13.031 "subsystem": "bdev", 00:18:13.031 "config": [ 00:18:13.031 { 00:18:13.031 "params": { 00:18:13.031 "io_mechanism": "io_uring_cmd", 00:18:13.031 "conserve_cpu": true, 00:18:13.031 "filename": "/dev/ng0n1", 00:18:13.031 "name": "xnvme_bdev" 00:18:13.031 }, 00:18:13.031 "method": "bdev_xnvme_create" 00:18:13.031 }, 00:18:13.031 { 00:18:13.031 "method": "bdev_wait_for_examine" 00:18:13.031 } 00:18:13.031 ] 00:18:13.031 } 00:18:13.031 ] 00:18:13.031 } 00:18:13.031 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:13.031 fio-3.35 00:18:13.031 Starting 1 thread 00:18:19.600 00:18:19.600 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73434: Sat Dec 7 10:30:18 2024 00:18:19.600 write: IOPS=29.8k, BW=117MiB/s (122MB/s)(583MiB/5002msec); 0 zone resets 00:18:19.600 slat (usec): min=2, max=178, avg= 6.39, stdev= 3.30 00:18:19.600 clat (usec): min=209, max=4406, avg=1894.30, stdev=483.15 00:18:19.600 lat (usec): min=213, max=4413, avg=1900.69, stdev=485.06 00:18:19.600 clat percentiles (usec): 00:18:19.600 | 1.00th=[ 1074], 5.00th=[ 1188], 10.00th=[ 1270], 20.00th=[ 1401], 00:18:19.600 | 30.00th=[ 1532], 40.00th=[ 1696], 50.00th=[ 1876], 60.00th=[ 2040], 00:18:19.600 | 70.00th=[ 2212], 80.00th=[ 2376], 90.00th=[ 2573], 95.00th=[ 2704], 00:18:19.600 | 99.00th=[ 2868], 99.50th=[ 2900], 99.90th=[ 2966], 99.95th=[ 2999], 00:18:19.600 | 99.99th=[ 3064] 00:18:19.600 bw ( KiB/s): min=101344, max=148960, per=98.17%, avg=117209.78, stdev=15723.80, samples=9 00:18:19.600 iops : min=25336, max=37240, avg=29302.44, stdev=3930.95, samples=9 00:18:19.600 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.17% 00:18:19.600 lat (msec) : 2=56.90%, 4=42.91%, 10=0.01% 00:18:19.600 cpu : usr=49.45%, sys=45.69%, ctx=12, majf=0, minf=763 00:18:19.600 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:19.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.600 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:19.600 issued rwts: total=0,149302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:19.600 00:18:19.600 Run status group 0 (all jobs): 00:18:19.600 WRITE: bw=117MiB/s (122MB/s), 117MiB/s-117MiB/s (122MB/s-122MB/s), io=583MiB (612MB), run=5002-5002msec 00:18:20.539 ----------------------------------------------------- 00:18:20.539 Suppressions used: 00:18:20.539 count bytes template 00:18:20.539 1 11 /usr/src/fio/parse.c 00:18:20.539 1 8 libtcmalloc_minimal.so 00:18:20.539 1 904 libcrypto.so 00:18:20.539 ----------------------------------------------------- 00:18:20.539 00:18:20.539 00:18:20.539 real 0m15.060s 00:18:20.539 user 0m8.620s 00:18:20.539 sys 0m5.792s 00:18:20.539 ************************************ 00:18:20.539 END TEST xnvme_fio_plugin 00:18:20.539 ************************************ 00:18:20.539 10:30:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.539 10:30:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:20.539 Process with pid 72881 is not found 00:18:20.539 10:30:19 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 72881 00:18:20.539 10:30:19 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72881 ']' 00:18:20.539 10:30:19 nvme_xnvme -- 
common/autotest_common.sh@958 -- # kill -0 72881 00:18:20.539 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72881) - No such process 00:18:20.539 10:30:19 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 72881 is not found' 00:18:20.539 00:18:20.539 real 3m52.671s 00:18:20.539 user 2m4.836s 00:18:20.539 sys 1m30.469s 00:18:20.539 10:30:19 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.540 ************************************ 00:18:20.540 END TEST nvme_xnvme 00:18:20.540 ************************************ 00:18:20.540 10:30:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:20.540 10:30:19 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:20.540 10:30:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:20.540 10:30:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.540 10:30:19 -- common/autotest_common.sh@10 -- # set +x 00:18:20.540 ************************************ 00:18:20.540 START TEST blockdev_xnvme 00:18:20.540 ************************************ 00:18:20.540 10:30:19 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:20.540 * Looking for test storage... 00:18:20.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:20.540 10:30:19 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:20.540 10:30:19 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:18:20.540 10:30:19 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:20.800 10:30:19 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:20.800 10:30:19 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:18:20.800 10:30:19 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.800 10:30:19 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:20.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.800 --rc genhtml_branch_coverage=1 00:18:20.800 --rc genhtml_function_coverage=1 00:18:20.800 --rc genhtml_legend=1 00:18:20.800 --rc geninfo_all_blocks=1 00:18:20.800 --rc geninfo_unexecuted_blocks=1 00:18:20.800 00:18:20.800 ' 00:18:20.800 10:30:19 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:20.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.800 --rc genhtml_branch_coverage=1 00:18:20.800 --rc genhtml_function_coverage=1 00:18:20.800 --rc genhtml_legend=1 00:18:20.800 --rc geninfo_all_blocks=1 00:18:20.800 --rc geninfo_unexecuted_blocks=1 00:18:20.800 00:18:20.800 ' 00:18:20.800 10:30:19 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:20.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.800 --rc genhtml_branch_coverage=1 00:18:20.800 --rc genhtml_function_coverage=1 00:18:20.800 --rc genhtml_legend=1 00:18:20.800 --rc geninfo_all_blocks=1 00:18:20.800 --rc geninfo_unexecuted_blocks=1 00:18:20.800 00:18:20.800 ' 00:18:20.800 10:30:19 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:20.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.800 --rc genhtml_branch_coverage=1 00:18:20.800 --rc genhtml_function_coverage=1 00:18:20.800 --rc genhtml_legend=1 00:18:20.800 --rc geninfo_all_blocks=1 00:18:20.800 --rc geninfo_unexecuted_blocks=1 00:18:20.800 00:18:20.800 ' 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73569 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:20.800 10:30:19 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73569 00:18:20.800 10:30:19 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73569 ']' 00:18:20.800 10:30:19 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.800 10:30:19 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.801 10:30:19 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.801 10:30:19 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.801 10:30:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:20.801 [2024-12-07 10:30:20.097290] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:18:20.801 [2024-12-07 10:30:20.097447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73569 ] 00:18:21.060 [2024-12-07 10:30:20.284687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.320 [2024-12-07 10:30:20.425070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.260 10:30:21 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.260 10:30:21 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:18:22.260 10:30:21 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:22.260 10:30:21 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:18:22.260 10:30:21 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:18:22.260 10:30:21 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:18:22.260 10:30:21 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:22.831 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:23.770 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:18:23.770 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:18:23.770 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:18:23.770 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:18:23.770 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:18:23.770 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:18:23.770 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:18:23.770 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:18:23.770 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:18:23.770 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:18:23.770 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:18:23.770 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:18:23.770 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:23.770 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:18:23.770 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:23.770 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:23.770 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:23.770 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:18:23.771 10:30:22 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:18:23.771 nvme0n1 00:18:23.771 nvme0n2 00:18:23.771 nvme0n3 00:18:23.771 nvme1n1 00:18:23.771 nvme2n1 00:18:23.771 nvme3n1 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:23.771 10:30:22 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.771 10:30:22 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:18:23.771 10:30:23 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:23.771 10:30:23 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.771 10:30:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:23.771 10:30:23 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.771 10:30:23 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:23.771 10:30:23 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.771 10:30:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:23.771 10:30:23 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.771 10:30:23 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:23.771 10:30:23 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.771 10:30:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:23.771 
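[editor's note] Distilled from the setup just traced: setup_xnvme_conf scans /dev/nvme*n*, skips zoned namespaces, and issues one bdev_xnvme_create RPC per remaining block device before waiting for examine. A rough equivalent of the six calls printed by blockdev.sh@100 above is sketched below; it assumes the same spdk_tgt is listening on the default RPC socket and uses scripts/rpc.py directly rather than the rpc_cmd helper seen in the trace.

# io_uring is the io_mechanism selected by setup_xnvme_conf in this run;
# -c enables conserve_cpu, matching the generated 'bdev_xnvme_create ... io_uring -c' lines.
for dev in /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create "$dev" "${dev##*/}" io_uring -c
done
# Let the bdev layer finish examining the new bdevs before they are used.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine

[end of editor's note]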
10:30:23 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.771 10:30:23 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:23.771 10:30:23 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:18:23.771 10:30:23 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.771 10:30:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:23.771 10:30:23 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:24.031 10:30:23 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.031 10:30:23 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:24.031 10:30:23 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:24.031 10:30:23 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e7613b96-e2da-4c55-a222-a24e7315486f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e7613b96-e2da-4c55-a222-a24e7315486f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "327e0831-b5ec-4dc7-a44e-f56b52ab926f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "327e0831-b5ec-4dc7-a44e-f56b52ab926f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "673f26d2-d61d-42df-99ba-09fc6d6ed759"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "673f26d2-d61d-42df-99ba-09fc6d6ed759",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0bd6e783-70b9-43c6-9528-5d95680e2d96"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0bd6e783-70b9-43c6-9528-5d95680e2d96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "af9370d7-1df0-41f1-9854-0c18274fafe9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "af9370d7-1df0-41f1-9854-0c18274fafe9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c43d630c-e677-4f0a-bb13-43c0e10a9ecb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c43d630c-e677-4f0a-bb13-43c0e10a9ecb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:24.031 10:30:23 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:24.031 10:30:23 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:18:24.031 10:30:23 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:24.031 10:30:23 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73569 00:18:24.031 10:30:23 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73569 ']' 00:18:24.031 10:30:23 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73569 00:18:24.031 10:30:23 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:18:24.031 10:30:23 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.031 10:30:23 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 73569 00:18:24.031 killing process with pid 73569 00:18:24.031 10:30:23 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.031 10:30:23 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.031 10:30:23 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73569' 00:18:24.031 10:30:23 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73569 00:18:24.031 10:30:23 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73569 00:18:26.566 10:30:25 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:26.566 10:30:25 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:18:26.566 10:30:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:26.566 10:30:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.566 10:30:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:26.566 ************************************ 00:18:26.566 START TEST bdev_hello_world 00:18:26.566 ************************************ 00:18:26.566 10:30:25 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:18:26.566 [2024-12-07 10:30:25.696372] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:26.566 [2024-12-07 10:30:25.696524] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73875 ] 00:18:26.566 [2024-12-07 10:30:25.879654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.825 [2024-12-07 10:30:25.990253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.101 [2024-12-07 10:30:26.420504] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:27.101 [2024-12-07 10:30:26.420555] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:18:27.101 [2024-12-07 10:30:26.420573] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:27.101 [2024-12-07 10:30:26.422563] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:27.101 [2024-12-07 10:30:26.422900] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:27.101 [2024-12-07 10:30:26.422925] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:27.101 [2024-12-07 10:30:26.423153] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:27.101 00:18:27.101 [2024-12-07 10:30:26.423176] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:28.548 00:18:28.548 real 0m1.913s 00:18:28.548 user 0m1.515s 00:18:28.548 sys 0m0.276s 00:18:28.548 10:30:27 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:28.548 10:30:27 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:28.548 ************************************ 00:18:28.548 END TEST bdev_hello_world 00:18:28.548 ************************************ 00:18:28.548 10:30:27 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:18:28.548 10:30:27 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:28.548 10:30:27 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:28.548 10:30:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:28.548 ************************************ 00:18:28.548 START TEST bdev_bounds 00:18:28.548 ************************************ 00:18:28.548 10:30:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:28.548 10:30:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73918 00:18:28.548 10:30:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:28.548 10:30:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:28.548 Process bdevio pid: 73918 00:18:28.548 10:30:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73918' 00:18:28.548 10:30:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73918 00:18:28.548 10:30:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73918 ']' 00:18:28.548 10:30:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.548 10:30:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.548 10:30:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.548 10:30:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.548 10:30:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:28.548 [2024-12-07 10:30:27.685082] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:18:28.548 [2024-12-07 10:30:27.685250] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73918 ] 00:18:28.548 [2024-12-07 10:30:27.872481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:28.807 [2024-12-07 10:30:27.979280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.807 [2024-12-07 10:30:27.979471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.807 [2024-12-07 10:30:27.979482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.374 10:30:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.374 10:30:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:29.374 10:30:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:29.374 I/O targets: 00:18:29.374 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:29.374 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:29.374 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:29.374 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:18:29.374 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:29.374 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:18:29.374 00:18:29.374 00:18:29.374 CUnit - A unit testing framework for C - Version 2.1-3 00:18:29.374 http://cunit.sourceforge.net/ 00:18:29.374 00:18:29.374 00:18:29.374 Suite: bdevio tests on: nvme3n1 00:18:29.374 Test: blockdev write read block ...passed 00:18:29.374 Test: blockdev write zeroes read block ...passed 00:18:29.374 Test: blockdev write zeroes read no split ...passed 00:18:29.374 Test: blockdev write zeroes read split ...passed 00:18:29.374 Test: blockdev write zeroes read split partial ...passed 00:18:29.374 Test: blockdev reset ...passed 00:18:29.374 Test: blockdev write read 8 blocks ...passed 00:18:29.374 Test: blockdev write read size > 128k ...passed 00:18:29.374 Test: blockdev write read invalid size ...passed 00:18:29.374 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:29.374 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:29.374 Test: blockdev write read max offset ...passed 00:18:29.374 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:29.374 Test: blockdev writev readv 8 blocks ...passed 00:18:29.374 Test: blockdev writev readv 30 x 1block ...passed 00:18:29.374 Test: blockdev writev readv block ...passed 00:18:29.374 Test: blockdev writev readv size > 128k ...passed 00:18:29.374 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:29.374 Test: blockdev comparev and writev ...passed 00:18:29.374 Test: blockdev nvme passthru rw ...passed 00:18:29.374 Test: blockdev nvme passthru vendor specific ...passed 00:18:29.374 Test: blockdev nvme admin passthru ...passed 00:18:29.374 Test: blockdev copy ...passed 00:18:29.374 Suite: bdevio tests on: nvme2n1 00:18:29.374 Test: blockdev write read block ...passed 00:18:29.374 Test: blockdev write zeroes read block ...passed 00:18:29.374 Test: blockdev write zeroes read no split ...passed 00:18:29.634 Test: blockdev write zeroes read split ...passed 00:18:29.634 Test: blockdev write zeroes read split partial ...passed 00:18:29.634 Test: blockdev reset ...passed 
00:18:29.634 Test: blockdev write read 8 blocks ...passed 00:18:29.634 Test: blockdev write read size > 128k ...passed 00:18:29.634 Test: blockdev write read invalid size ...passed 00:18:29.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:29.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:29.634 Test: blockdev write read max offset ...passed 00:18:29.634 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:29.634 Test: blockdev writev readv 8 blocks ...passed 00:18:29.634 Test: blockdev writev readv 30 x 1block ...passed 00:18:29.634 Test: blockdev writev readv block ...passed 00:18:29.634 Test: blockdev writev readv size > 128k ...passed 00:18:29.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:29.634 Test: blockdev comparev and writev ...passed 00:18:29.634 Test: blockdev nvme passthru rw ...passed 00:18:29.634 Test: blockdev nvme passthru vendor specific ...passed 00:18:29.634 Test: blockdev nvme admin passthru ...passed 00:18:29.634 Test: blockdev copy ...passed 00:18:29.634 Suite: bdevio tests on: nvme1n1 00:18:29.634 Test: blockdev write read block ...passed 00:18:29.634 Test: blockdev write zeroes read block ...passed 00:18:29.634 Test: blockdev write zeroes read no split ...passed 00:18:29.634 Test: blockdev write zeroes read split ...passed 00:18:29.634 Test: blockdev write zeroes read split partial ...passed 00:18:29.634 Test: blockdev reset ...passed 00:18:29.634 Test: blockdev write read 8 blocks ...passed 00:18:29.634 Test: blockdev write read size > 128k ...passed 00:18:29.634 Test: blockdev write read invalid size ...passed 00:18:29.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:29.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:29.634 Test: blockdev write read max offset ...passed 00:18:29.634 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:29.634 Test: blockdev writev readv 8 blocks ...passed 00:18:29.634 Test: blockdev writev readv 30 x 1block ...passed 00:18:29.634 Test: blockdev writev readv block ...passed 00:18:29.634 Test: blockdev writev readv size > 128k ...passed 00:18:29.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:29.634 Test: blockdev comparev and writev ...passed 00:18:29.634 Test: blockdev nvme passthru rw ...passed 00:18:29.634 Test: blockdev nvme passthru vendor specific ...passed 00:18:29.634 Test: blockdev nvme admin passthru ...passed 00:18:29.634 Test: blockdev copy ...passed 00:18:29.634 Suite: bdevio tests on: nvme0n3 00:18:29.634 Test: blockdev write read block ...passed 00:18:29.634 Test: blockdev write zeroes read block ...passed 00:18:29.634 Test: blockdev write zeroes read no split ...passed 00:18:29.634 Test: blockdev write zeroes read split ...passed 00:18:29.634 Test: blockdev write zeroes read split partial ...passed 00:18:29.634 Test: blockdev reset ...passed 00:18:29.634 Test: blockdev write read 8 blocks ...passed 00:18:29.634 Test: blockdev write read size > 128k ...passed 00:18:29.634 Test: blockdev write read invalid size ...passed 00:18:29.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:29.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:29.634 Test: blockdev write read max offset ...passed 00:18:29.634 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:29.634 Test: blockdev writev readv 8 blocks 
...passed 00:18:29.634 Test: blockdev writev readv 30 x 1block ...passed 00:18:29.634 Test: blockdev writev readv block ...passed 00:18:29.634 Test: blockdev writev readv size > 128k ...passed 00:18:29.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:29.634 Test: blockdev comparev and writev ...passed 00:18:29.634 Test: blockdev nvme passthru rw ...passed 00:18:29.634 Test: blockdev nvme passthru vendor specific ...passed 00:18:29.634 Test: blockdev nvme admin passthru ...passed 00:18:29.634 Test: blockdev copy ...passed 00:18:29.634 Suite: bdevio tests on: nvme0n2 00:18:29.634 Test: blockdev write read block ...passed 00:18:29.634 Test: blockdev write zeroes read block ...passed 00:18:29.634 Test: blockdev write zeroes read no split ...passed 00:18:29.893 Test: blockdev write zeroes read split ...passed 00:18:29.893 Test: blockdev write zeroes read split partial ...passed 00:18:29.893 Test: blockdev reset ...passed 00:18:29.893 Test: blockdev write read 8 blocks ...passed 00:18:29.893 Test: blockdev write read size > 128k ...passed 00:18:29.893 Test: blockdev write read invalid size ...passed 00:18:29.893 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:29.893 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:29.893 Test: blockdev write read max offset ...passed 00:18:29.893 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:29.893 Test: blockdev writev readv 8 blocks ...passed 00:18:29.893 Test: blockdev writev readv 30 x 1block ...passed 00:18:29.893 Test: blockdev writev readv block ...passed 00:18:29.893 Test: blockdev writev readv size > 128k ...passed 00:18:29.893 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:29.893 Test: blockdev comparev and writev ...passed 00:18:29.893 Test: blockdev nvme passthru rw ...passed 00:18:29.894 Test: blockdev nvme passthru vendor specific ...passed 00:18:29.894 Test: blockdev nvme admin passthru ...passed 00:18:29.894 Test: blockdev copy ...passed 00:18:29.894 Suite: bdevio tests on: nvme0n1 00:18:29.894 Test: blockdev write read block ...passed 00:18:29.894 Test: blockdev write zeroes read block ...passed 00:18:29.894 Test: blockdev write zeroes read no split ...passed 00:18:29.894 Test: blockdev write zeroes read split ...passed 00:18:29.894 Test: blockdev write zeroes read split partial ...passed 00:18:29.894 Test: blockdev reset ...passed 00:18:29.894 Test: blockdev write read 8 blocks ...passed 00:18:29.894 Test: blockdev write read size > 128k ...passed 00:18:29.894 Test: blockdev write read invalid size ...passed 00:18:29.894 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:29.894 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:29.894 Test: blockdev write read max offset ...passed 00:18:29.894 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:29.894 Test: blockdev writev readv 8 blocks ...passed 00:18:29.894 Test: blockdev writev readv 30 x 1block ...passed 00:18:29.894 Test: blockdev writev readv block ...passed 00:18:29.894 Test: blockdev writev readv size > 128k ...passed 00:18:29.894 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:29.894 Test: blockdev comparev and writev ...passed 00:18:29.894 Test: blockdev nvme passthru rw ...passed 00:18:29.894 Test: blockdev nvme passthru vendor specific ...passed 00:18:29.894 Test: blockdev nvme admin passthru ...passed 00:18:29.894 Test: blockdev copy ...passed 
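The six suites above all exercise the same battery of checks (write/read, reset, writev/readv, comparev-and-writev, passthru, copy), one suite per bdev listed under "I/O targets"; only the backing device changes. A minimal sketch of reproducing this stage by hand, assuming the bdevio binary, bdev.json config and tests.py driver at the paths printed earlier in this trace (the backgrounding and kill handling shown here is illustrative; the harness uses its own killprocess/trap helpers):

# start bdevio, let it wait for the RPC-driven test start, then drive it (sketch)
BDEVIO=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
"$BDEVIO" -w -s 0 --json "$CONF" &
bdevio_pid=$!
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"; wait "$bdevio_pid"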
00:18:29.894 00:18:29.894 Run Summary: Type Total Ran Passed Failed Inactive 00:18:29.894 suites 6 6 n/a 0 0 00:18:29.894 tests 138 138 138 0 0 00:18:29.894 asserts 780 780 780 0 n/a 00:18:29.894 00:18:29.894 Elapsed time = 1.385 seconds 00:18:29.894 0 00:18:29.894 10:30:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73918 00:18:29.894 10:30:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73918 ']' 00:18:29.894 10:30:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73918 00:18:29.894 10:30:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:29.894 10:30:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.894 10:30:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73918 00:18:29.894 10:30:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:29.894 10:30:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:29.894 killing process with pid 73918 00:18:29.894 10:30:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73918' 00:18:29.894 10:30:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73918 00:18:29.894 10:30:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73918 00:18:31.292 10:30:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:31.292 00:18:31.292 real 0m2.702s 00:18:31.292 user 0m6.650s 00:18:31.292 sys 0m0.443s 00:18:31.292 10:30:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.292 ************************************ 00:18:31.292 END TEST bdev_bounds 00:18:31.292 ************************************ 00:18:31.292 10:30:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:31.292 10:30:30 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:18:31.292 10:30:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:31.292 10:30:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.292 10:30:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:31.292 ************************************ 00:18:31.292 START TEST bdev_nbd 00:18:31.292 ************************************ 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73972 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73972 /var/tmp/spdk-nbd.sock 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73972 ']' 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.292 10:30:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:31.292 [2024-12-07 10:30:30.464446] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:18:31.292 [2024-12-07 10:30:30.464566] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.551 [2024-12-07 10:30:30.646377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.551 [2024-12-07 10:30:30.754457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:32.120 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:18:32.378 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:32.378 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:32.378 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:32.378 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:32.378 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:32.378 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:32.378 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:32.378 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:32.379 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:32.379 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:32.379 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:32.379 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.379 
1+0 records in 00:18:32.379 1+0 records out 00:18:32.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691179 s, 5.9 MB/s 00:18:32.379 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.379 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:32.379 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.379 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:32.379 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:32.379 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:32.379 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:32.379 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.638 1+0 records in 00:18:32.638 1+0 records out 00:18:32.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879611 s, 4.7 MB/s 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:32.638 10:30:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:18:32.929 10:30:32 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.929 1+0 records in 00:18:32.929 1+0 records out 00:18:32.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000746109 s, 5.5 MB/s 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:32.929 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:33.188 1+0 records in 00:18:33.188 1+0 records out 00:18:33.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000784245 s, 5.2 MB/s 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:33.188 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:33.448 1+0 records in 00:18:33.448 1+0 records out 00:18:33.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000792995 s, 5.2 MB/s 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:33.448 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:18:33.707 10:30:32 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:33.707 1+0 records in 00:18:33.707 1+0 records out 00:18:33.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103898 s, 3.9 MB/s 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:33.707 10:30:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:33.707 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:33.707 { 00:18:33.707 "nbd_device": "/dev/nbd0", 00:18:33.707 "bdev_name": "nvme0n1" 00:18:33.707 }, 00:18:33.707 { 00:18:33.707 "nbd_device": "/dev/nbd1", 00:18:33.707 "bdev_name": "nvme0n2" 00:18:33.707 }, 00:18:33.707 { 00:18:33.707 "nbd_device": "/dev/nbd2", 00:18:33.707 "bdev_name": "nvme0n3" 00:18:33.707 }, 00:18:33.707 { 00:18:33.707 "nbd_device": "/dev/nbd3", 00:18:33.707 "bdev_name": "nvme1n1" 00:18:33.707 }, 00:18:33.707 { 00:18:33.707 "nbd_device": "/dev/nbd4", 00:18:33.707 "bdev_name": "nvme2n1" 00:18:33.707 }, 00:18:33.707 { 00:18:33.707 "nbd_device": "/dev/nbd5", 00:18:33.707 "bdev_name": "nvme3n1" 00:18:33.707 } 00:18:33.707 ]' 00:18:33.707 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:33.707 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:33.707 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:33.707 { 00:18:33.707 "nbd_device": "/dev/nbd0", 00:18:33.707 "bdev_name": "nvme0n1" 00:18:33.707 }, 00:18:33.707 { 00:18:33.707 "nbd_device": "/dev/nbd1", 00:18:33.707 "bdev_name": "nvme0n2" 00:18:33.707 }, 00:18:33.707 { 00:18:33.707 "nbd_device": "/dev/nbd2", 00:18:33.707 "bdev_name": "nvme0n3" 00:18:33.707 }, 00:18:33.707 { 00:18:33.707 "nbd_device": "/dev/nbd3", 00:18:33.707 "bdev_name": "nvme1n1" 00:18:33.707 }, 00:18:33.707 { 00:18:33.707 "nbd_device": "/dev/nbd4", 00:18:33.707 "bdev_name": "nvme2n1" 00:18:33.707 }, 00:18:33.707 { 00:18:33.707 "nbd_device": 
"/dev/nbd5", 00:18:33.707 "bdev_name": "nvme3n1" 00:18:33.707 } 00:18:33.707 ]' 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:33.966 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:33.967 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:33.967 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:33.967 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:34.225 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:34.225 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:34.225 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:34.225 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:34.225 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:34.225 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:34.225 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:34.225 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:34.225 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.225 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:18:34.484 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:18:34.484 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:18:34.484 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:18:34.484 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:34.484 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:34.484 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:18:34.484 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:34.484 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:34.484 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.484 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:18:34.742 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:18:34.742 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:18:34.742 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:18:34.742 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:34.742 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:34.742 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:18:34.742 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:34.742 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:34.742 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.742 10:30:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:18:35.000 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:18:35.000 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:18:35.000 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:18:35.000 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:35.000 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:35.000 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:18:35.000 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:35.000 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:35.000 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:35.000 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:18:35.259 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:18:35.259 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:18:35.259 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:18:35.259 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:35.259 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:35.259 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:18:35.259 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:35.259 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:35.259 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:35.259 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:35.259 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:35.518 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:35.519 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:18:35.778 /dev/nbd0 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:35.778 1+0 records in 00:18:35.778 1+0 records out 00:18:35.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586046 s, 7.0 MB/s 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:35.778 10:30:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:35.779 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:35.779 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:35.779 10:30:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:18:36.037 /dev/nbd1 00:18:36.037 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:36.037 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:36.037 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:36.037 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:36.037 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:36.037 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:36.037 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:36.037 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:36.037 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:36.037 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:36.038 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.038 1+0 records in 00:18:36.038 1+0 records out 00:18:36.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055108 s, 7.4 MB/s 00:18:36.038 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.038 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:36.038 10:30:35 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.038 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:36.038 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:36.038 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.038 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:36.038 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:18:36.038 /dev/nbd10 00:18:36.296 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:18:36.296 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:18:36.296 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:18:36.296 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:36.296 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.297 1+0 records in 00:18:36.297 1+0 records out 00:18:36.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678833 s, 6.0 MB/s 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:36.297 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:18:36.297 /dev/nbd11 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:36.555 10:30:35 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.555 1+0 records in 00:18:36.555 1+0 records out 00:18:36.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660284 s, 6.2 MB/s 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:36.555 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:18:36.555 /dev/nbd12 00:18:36.813 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:18:36.813 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:18:36.813 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:18:36.813 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:36.813 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:36.813 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:36.813 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:18:36.813 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:36.813 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:36.813 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:36.814 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.814 1+0 records in 00:18:36.814 1+0 records out 00:18:36.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111651 s, 3.7 MB/s 00:18:36.814 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.814 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:36.814 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.814 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:36.814 10:30:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:36.814 10:30:35 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.814 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:36.814 10:30:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:18:36.814 /dev/nbd13 00:18:36.814 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:18:36.814 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:18:36.814 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:18:36.814 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:36.814 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:36.814 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:36.814 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:37.072 1+0 records in 00:18:37.072 1+0 records out 00:18:37.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701226 s, 5.8 MB/s 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:37.072 { 00:18:37.072 "nbd_device": "/dev/nbd0", 00:18:37.072 "bdev_name": "nvme0n1" 00:18:37.072 }, 00:18:37.072 { 00:18:37.072 "nbd_device": "/dev/nbd1", 00:18:37.072 "bdev_name": "nvme0n2" 00:18:37.072 }, 00:18:37.072 { 00:18:37.072 "nbd_device": "/dev/nbd10", 00:18:37.072 "bdev_name": "nvme0n3" 00:18:37.072 }, 00:18:37.072 { 00:18:37.072 "nbd_device": "/dev/nbd11", 00:18:37.072 "bdev_name": "nvme1n1" 00:18:37.072 }, 00:18:37.072 { 00:18:37.072 "nbd_device": "/dev/nbd12", 00:18:37.072 "bdev_name": "nvme2n1" 00:18:37.072 }, 00:18:37.072 { 00:18:37.072 "nbd_device": "/dev/nbd13", 00:18:37.072 "bdev_name": "nvme3n1" 00:18:37.072 } 00:18:37.072 ]' 00:18:37.072 10:30:36 
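The NBD stage maps each bdev onto a kernel block node over the dedicated /var/tmp/spdk-nbd.sock RPC socket and then reads the mapping back, which is what the JSON above shows (nvme0n1 -> /dev/nbd0 through nvme3n1 -> /dev/nbd13). A condensed sketch of that flow for two devices, assuming the same rpc.py calls and socket path visible in this trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock
$RPC -s $SOCK nbd_start_disk nvme0n1 /dev/nbd0             # export a bdev as /dev/nbd0
$RPC -s $SOCK nbd_start_disk nvme0n2 /dev/nbd1
$RPC -s $SOCK nbd_get_disks | jq -r '.[] | .nbd_device'    # list the active mappings
$RPC -s $SOCK nbd_stop_disk /dev/nbd0                      # detach when done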
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:37.072 { 00:18:37.072 "nbd_device": "/dev/nbd0", 00:18:37.072 "bdev_name": "nvme0n1" 00:18:37.072 }, 00:18:37.072 { 00:18:37.072 "nbd_device": "/dev/nbd1", 00:18:37.072 "bdev_name": "nvme0n2" 00:18:37.072 }, 00:18:37.072 { 00:18:37.072 "nbd_device": "/dev/nbd10", 00:18:37.072 "bdev_name": "nvme0n3" 00:18:37.072 }, 00:18:37.072 { 00:18:37.072 "nbd_device": "/dev/nbd11", 00:18:37.072 "bdev_name": "nvme1n1" 00:18:37.072 }, 00:18:37.072 { 00:18:37.072 "nbd_device": "/dev/nbd12", 00:18:37.072 "bdev_name": "nvme2n1" 00:18:37.072 }, 00:18:37.072 { 00:18:37.072 "nbd_device": "/dev/nbd13", 00:18:37.072 "bdev_name": "nvme3n1" 00:18:37.072 } 00:18:37.072 ]' 00:18:37.072 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:37.331 /dev/nbd1 00:18:37.331 /dev/nbd10 00:18:37.331 /dev/nbd11 00:18:37.331 /dev/nbd12 00:18:37.331 /dev/nbd13' 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:37.331 /dev/nbd1 00:18:37.331 /dev/nbd10 00:18:37.331 /dev/nbd11 00:18:37.331 /dev/nbd12 00:18:37.331 /dev/nbd13' 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:37.331 256+0 records in 00:18:37.331 256+0 records out 00:18:37.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113822 s, 92.1 MB/s 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:37.331 256+0 records in 00:18:37.331 256+0 records out 00:18:37.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127654 s, 8.2 MB/s 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.331 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:37.590 256+0 records in 00:18:37.590 256+0 records out 00:18:37.590 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.129291 s, 8.1 MB/s 00:18:37.590 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.590 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:18:37.590 256+0 records in 00:18:37.590 256+0 records out 00:18:37.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130963 s, 8.0 MB/s 00:18:37.590 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.590 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:18:37.849 256+0 records in 00:18:37.849 256+0 records out 00:18:37.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131003 s, 8.0 MB/s 00:18:37.850 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.850 10:30:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:18:37.850 256+0 records in 00:18:37.850 256+0 records out 00:18:37.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131913 s, 7.9 MB/s 00:18:37.850 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.850 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:18:38.109 256+0 records in 00:18:38.109 256+0 records out 00:18:38.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161192 s, 6.5 MB/s 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:38.109 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:38.110 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:38.110 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:38.110 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:38.110 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:38.110 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:38.369 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:38.369 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:38.369 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:38.369 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:38.369 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:38.369 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:38.369 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:38.369 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:38.369 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:38.369 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:38.629 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:38.629 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:38.629 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:38.629 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:38.629 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:38.629 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:38.629 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:38.629 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:38.629 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:38.629 10:30:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:18:38.888 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:18:38.888 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:18:38.888 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:18:38.888 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:38.888 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:38.888 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:18:38.888 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:38.888 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:38.888 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:38.888 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.148 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:18:39.407 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:18:39.407 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:18:39.407 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:18:39.407 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.407 10:30:38 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.407 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:18:39.407 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:39.407 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.407 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:39.407 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:39.407 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:39.666 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:39.666 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:39.666 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:39.666 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:39.666 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:39.666 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:39.666 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:39.666 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:39.667 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:39.667 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:39.667 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:39.667 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:39.667 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:39.667 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:39.667 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:39.667 10:30:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:39.926 malloc_lvol_verify 00:18:39.926 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:40.185 94821215-f2a7-4724-a060-80a750e02793 00:18:40.185 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:40.444 04100915-fb8b-4778-9c84-42c0321cad8c 00:18:40.444 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:40.444 /dev/nbd0 00:18:40.704 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:40.704 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:40.704 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:40.704 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:40.704 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:18:40.704 mke2fs 1.47.0 (5-Feb-2023) 00:18:40.704 Discarding device blocks: 0/4096 done 00:18:40.704 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:40.704 00:18:40.704 Allocating group tables: 0/1 done 00:18:40.704 Writing inode tables: 0/1 done 00:18:40.704 Creating journal (1024 blocks): done 00:18:40.704 Writing superblocks and filesystem accounting information: 0/1 done 00:18:40.704 00:18:40.704 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:40.705 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:40.705 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:40.705 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:40.705 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:40.705 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:40.705 10:30:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73972 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73972 ']' 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73972 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.705 10:30:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73972 00:18:40.964 10:30:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:40.964 killing process with pid 73972 00:18:40.964 10:30:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:40.964 10:30:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73972' 00:18:40.964 10:30:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73972 00:18:40.964 10:30:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73972 00:18:41.902 10:30:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:41.902 00:18:41.902 real 0m10.847s 00:18:41.902 user 0m13.666s 00:18:41.902 sys 0m4.839s 00:18:41.902 10:30:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.902 10:30:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:41.902 ************************************ 
00:18:41.902 END TEST bdev_nbd 00:18:41.902 ************************************ 00:18:42.162 10:30:41 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:18:42.162 10:30:41 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:18:42.162 10:30:41 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:18:42.162 10:30:41 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:18:42.162 10:30:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:42.162 10:30:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.162 10:30:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:42.162 ************************************ 00:18:42.162 START TEST bdev_fio 00:18:42.162 ************************************ 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:42.162 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:18:42.162 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:42.163 ************************************ 00:18:42.163 START TEST bdev_fio_rw_verify 00:18:42.163 ************************************ 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:42.163 10:30:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:42.423 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:42.423 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:42.423 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:42.423 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:42.423 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:42.423 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:42.423 fio-3.35 00:18:42.423 Starting 6 threads 00:18:54.634 00:18:54.634 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74385: Sat Dec 7 10:30:52 2024 00:18:54.634 read: IOPS=31.4k, BW=123MiB/s (129MB/s)(1229MiB/10001msec) 00:18:54.634 slat (usec): min=2, max=7422, avg= 8.55, stdev=16.42 00:18:54.634 clat (usec): min=88, max=23412, avg=544.89, 
stdev=287.96 00:18:54.634 lat (usec): min=93, max=23435, avg=553.44, stdev=289.57 00:18:54.634 clat percentiles (usec): 00:18:54.634 | 50.000th=[ 523], 99.000th=[ 1287], 99.900th=[ 2114], 99.990th=[ 4555], 00:18:54.634 | 99.999th=[23462] 00:18:54.634 write: IOPS=31.8k, BW=124MiB/s (130MB/s)(1242MiB/10001msec); 0 zone resets 00:18:54.634 slat (usec): min=10, max=5051, avg=29.35, stdev=53.37 00:18:54.634 clat (usec): min=85, max=14425, avg=704.92, stdev=349.18 00:18:54.634 lat (usec): min=97, max=14487, avg=734.27, stdev=359.54 00:18:54.634 clat percentiles (usec): 00:18:54.634 | 50.000th=[ 668], 99.000th=[ 1713], 99.900th=[ 3654], 99.990th=[ 8455], 00:18:54.634 | 99.999th=[11338] 00:18:54.634 bw ( KiB/s): min=90056, max=150158, per=99.91%, avg=127047.37, stdev=2795.94, samples=114 00:18:54.634 iops : min=22514, max=37538, avg=31761.21, stdev=698.96, samples=114 00:18:54.634 lat (usec) : 100=0.01%, 250=7.11%, 500=29.11%, 750=35.67%, 1000=19.32% 00:18:54.634 lat (msec) : 2=8.50%, 4=0.25%, 10=0.05%, 20=0.01%, 50=0.01% 00:18:54.634 cpu : usr=51.04%, sys=31.32%, ctx=7803, majf=0, minf=26386 00:18:54.634 IO depths : 1=11.7%, 2=24.0%, 4=51.0%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.634 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.634 issued rwts: total=314517,317935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.634 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:54.634 00:18:54.634 Run status group 0 (all jobs): 00:18:54.634 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=1229MiB (1288MB), run=10001-10001msec 00:18:54.634 WRITE: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=1242MiB (1302MB), run=10001-10001msec 00:18:54.634 ----------------------------------------------------- 00:18:54.634 Suppressions used: 00:18:54.634 count bytes template 00:18:54.634 6 48 /usr/src/fio/parse.c 00:18:54.634 3205 307680 /usr/src/fio/iolog.c 00:18:54.634 1 8 libtcmalloc_minimal.so 00:18:54.634 1 904 libcrypto.so 00:18:54.634 ----------------------------------------------------- 00:18:54.634 00:18:54.634 00:18:54.634 real 0m12.555s 00:18:54.634 user 0m32.679s 00:18:54.634 sys 0m19.210s 00:18:54.634 10:30:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.634 10:30:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:54.634 ************************************ 00:18:54.634 END TEST bdev_fio_rw_verify 00:18:54.634 ************************************ 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # 
local fio_dir=/usr/src/fio 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:54.893 10:30:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:54.894 10:30:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e7613b96-e2da-4c55-a222-a24e7315486f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e7613b96-e2da-4c55-a222-a24e7315486f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "327e0831-b5ec-4dc7-a44e-f56b52ab926f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "327e0831-b5ec-4dc7-a44e-f56b52ab926f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "673f26d2-d61d-42df-99ba-09fc6d6ed759"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "673f26d2-d61d-42df-99ba-09fc6d6ed759",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0bd6e783-70b9-43c6-9528-5d95680e2d96"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0bd6e783-70b9-43c6-9528-5d95680e2d96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "af9370d7-1df0-41f1-9854-0c18274fafe9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "af9370d7-1df0-41f1-9854-0c18274fafe9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c43d630c-e677-4f0a-bb13-43c0e10a9ecb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c43d630c-e677-4f0a-bb13-43c0e10a9ecb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:54.894 10:30:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:54.894 10:30:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:54.894 /home/vagrant/spdk_repo/spdk 00:18:54.894 10:30:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:54.894 10:30:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:54.894 10:30:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:18:54.894 00:18:54.894 real 0m12.797s 00:18:54.894 user 0m32.798s 00:18:54.894 sys 0m19.335s 00:18:54.894 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.894 ************************************ 00:18:54.894 END TEST bdev_fio 00:18:54.894 ************************************ 00:18:54.894 10:30:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:54.894 10:30:54 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:54.894 10:30:54 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:54.894 10:30:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:54.894 10:30:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.894 10:30:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:54.894 ************************************ 00:18:54.894 START TEST bdev_verify 00:18:54.894 ************************************ 00:18:54.894 10:30:54 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:55.153 [2024-12-07 10:30:54.253700] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:55.153 [2024-12-07 10:30:54.253861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74566 ] 00:18:55.153 [2024-12-07 10:30:54.439687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:55.413 [2024-12-07 10:30:54.552416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.413 [2024-12-07 10:30:54.552437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.672 Running I/O for 5 seconds... 
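The bdev_verify pass started above drives bdevperf against the bdev.json config generated earlier in this run. As a minimal standalone sketch of that same step (paths taken from the trace; flag comments are annotations, not part of the log):

  # 128 outstanding I/Os of 4096 bytes, 'verify' workload for 5 s,
  # core mask 0x3 (reactors on cores 0 and 1, as reported above)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3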
00:18:57.999 23712.00 IOPS, 92.62 MiB/s [2024-12-07T10:30:58.309Z] 22656.00 IOPS, 88.50 MiB/s [2024-12-07T10:30:59.247Z] 21845.33 IOPS, 85.33 MiB/s [2024-12-07T10:31:00.196Z] 21576.00 IOPS, 84.28 MiB/s [2024-12-07T10:31:00.196Z] 21780.60 IOPS, 85.08 MiB/s 00:19:00.843 Latency(us) 00:19:00.843 [2024-12-07T10:31:00.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.843 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:00.843 Verification LBA range: start 0x0 length 0x80000 00:19:00.843 nvme0n1 : 5.03 1525.48 5.96 0.00 0.00 83774.71 11949.13 79590.71 00:19:00.843 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:00.843 Verification LBA range: start 0x80000 length 0x80000 00:19:00.843 nvme0n1 : 5.06 1745.16 6.82 0.00 0.00 73242.59 9527.72 85486.32 00:19:00.843 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:00.843 Verification LBA range: start 0x0 length 0x80000 00:19:00.843 nvme0n2 : 5.04 1525.00 5.96 0.00 0.00 83656.86 10264.67 74958.44 00:19:00.843 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:00.843 Verification LBA range: start 0x80000 length 0x80000 00:19:00.843 nvme0n2 : 5.03 1754.55 6.85 0.00 0.00 72785.70 12686.09 80854.05 00:19:00.843 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:00.843 Verification LBA range: start 0x0 length 0x80000 00:19:00.843 nvme0n3 : 5.06 1516.38 5.92 0.00 0.00 83993.08 19266.00 83380.74 00:19:00.843 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:00.843 Verification LBA range: start 0x80000 length 0x80000 00:19:00.843 nvme0n3 : 5.09 1760.29 6.88 0.00 0.00 72476.23 15475.97 82538.51 00:19:00.843 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:00.843 Verification LBA range: start 0x0 length 0x20000 00:19:00.843 nvme1n1 : 5.08 1538.03 6.01 0.00 0.00 82667.22 15160.13 67378.38 00:19:00.843 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:00.843 Verification LBA range: start 0x20000 length 0x20000 00:19:00.843 nvme1n1 : 5.08 1737.89 6.79 0.00 0.00 73332.31 13001.92 84222.97 00:19:00.843 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:00.843 Verification LBA range: start 0x0 length 0xa0000 00:19:00.843 nvme2n1 : 5.07 1515.41 5.92 0.00 0.00 83785.71 15791.81 81275.17 00:19:00.843 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:00.843 Verification LBA range: start 0xa0000 length 0xa0000 00:19:00.843 nvme2n1 : 5.07 1742.76 6.81 0.00 0.00 73045.57 8527.58 74116.22 00:19:00.843 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:00.843 Verification LBA range: start 0x0 length 0xbd0bd 00:19:00.844 nvme3n1 : 5.08 2388.91 9.33 0.00 0.00 53040.96 3974.27 66957.26 00:19:00.844 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:00.844 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:19:00.844 nvme3n1 : 5.09 2738.37 10.70 0.00 0.00 46365.74 5316.58 58534.97 00:19:00.844 [2024-12-07T10:31:00.197Z] =================================================================================================================== 00:19:00.844 [2024-12-07T10:31:00.197Z] Total : 21488.25 83.94 0.00 0.00 71094.52 3974.27 85486.32 00:19:02.294 00:19:02.294 real 0m7.183s 00:19:02.294 user 0m10.967s 00:19:02.294 sys 0m1.942s 00:19:02.294 ************************************ 00:19:02.294 END 
TEST bdev_verify 00:19:02.294 ************************************ 00:19:02.294 10:31:01 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.294 10:31:01 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:02.294 10:31:01 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:02.294 10:31:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:02.294 10:31:01 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.294 10:31:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:02.294 ************************************ 00:19:02.294 START TEST bdev_verify_big_io 00:19:02.294 ************************************ 00:19:02.294 10:31:01 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:02.294 [2024-12-07 10:31:01.525554] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:02.294 [2024-12-07 10:31:01.525887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74673 ] 00:19:02.554 [2024-12-07 10:31:01.710327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:02.554 [2024-12-07 10:31:01.826781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.554 [2024-12-07 10:31:01.826813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.122 Running I/O for 5 seconds... 
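The bdev_verify_big_io pass started above repeats the same bdevperf verify workload with 64 KiB I/Os; relative to the 4 KiB sketch earlier, only the -o argument changes:

  # same JSON config, queue depth and core mask, but 65536-byte I/Os
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3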
00:19:08.312 2179.00 IOPS, 136.19 MiB/s [2024-12-07T10:31:08.231Z] 3551.50 IOPS, 221.97 MiB/s [2024-12-07T10:31:09.167Z] 3835.33 IOPS, 239.71 MiB/s 00:19:09.814 Latency(us) 00:19:09.814 [2024-12-07T10:31:09.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.814 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:09.814 Verification LBA range: start 0x0 length 0x8000 00:19:09.814 nvme0n1 : 5.88 138.71 8.67 0.00 0.00 882796.89 11054.27 976986.47 00:19:09.814 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:09.814 Verification LBA range: start 0x8000 length 0x8000 00:19:09.814 nvme0n1 : 5.47 194.34 12.15 0.00 0.00 641416.33 4737.54 808540.53 00:19:09.814 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:09.814 Verification LBA range: start 0x0 length 0x8000 00:19:09.814 nvme0n2 : 5.88 127.79 7.99 0.00 0.00 927292.96 90118.58 1179121.61 00:19:09.814 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:09.814 Verification LBA range: start 0x8000 length 0x8000 00:19:09.814 nvme0n2 : 5.42 207.93 13.00 0.00 0.00 586030.05 27161.91 828754.04 00:19:09.814 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:09.814 Verification LBA range: start 0x0 length 0x8000 00:19:09.814 nvme0n3 : 6.17 142.65 8.92 0.00 0.00 826538.40 120438.85 1044364.85 00:19:09.814 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:09.814 Verification LBA range: start 0x8000 length 0x8000 00:19:09.814 nvme0n3 : 5.48 186.94 11.68 0.00 0.00 631424.93 93487.50 798433.77 00:19:09.814 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:09.814 Verification LBA range: start 0x0 length 0x2000 00:19:09.814 nvme1n1 : 6.16 122.10 7.63 0.00 0.00 928567.22 134756.76 1293664.85 00:19:09.814 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:09.814 Verification LBA range: start 0x2000 length 0x2000 00:19:09.814 nvme1n1 : 5.57 206.89 12.93 0.00 0.00 560991.34 63588.34 633356.75 00:19:09.814 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:09.814 Verification LBA range: start 0x0 length 0xa000 00:19:09.815 nvme2n1 : 6.17 150.39 9.40 0.00 0.00 751634.32 18950.17 1098267.55 00:19:09.815 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:09.815 Verification LBA range: start 0xa000 length 0xa000 00:19:09.815 nvme2n1 : 5.66 214.85 13.43 0.00 0.00 529060.61 47585.98 579454.05 00:19:09.815 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:09.815 Verification LBA range: start 0x0 length 0xbd0b 00:19:09.815 nvme3n1 : 6.40 157.78 9.86 0.00 0.00 697364.04 320.77 1246499.98 00:19:09.815 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:09.815 Verification LBA range: start 0xbd0b length 0xbd0b 00:19:09.815 nvme3n1 : 6.40 232.56 14.53 0.00 0.00 468305.60 320.77 1098267.55 00:19:09.815 [2024-12-07T10:31:09.168Z] =================================================================================================================== 00:19:09.815 [2024-12-07T10:31:09.168Z] Total : 2082.91 130.18 0.00 0.00 673358.21 320.77 1293664.85 00:19:11.196 00:19:11.196 real 0m8.813s 00:19:11.196 user 0m16.045s 00:19:11.196 sys 0m0.570s 00:19:11.196 10:31:10 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.196 10:31:10 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:11.196 ************************************ 00:19:11.196 END TEST bdev_verify_big_io 00:19:11.196 ************************************ 00:19:11.196 10:31:10 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:11.196 10:31:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:11.196 10:31:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.196 10:31:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:11.196 ************************************ 00:19:11.196 START TEST bdev_write_zeroes 00:19:11.196 ************************************ 00:19:11.196 10:31:10 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:11.196 [2024-12-07 10:31:10.416480] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:11.196 [2024-12-07 10:31:10.416610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74807 ] 00:19:11.455 [2024-12-07 10:31:10.596866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.455 [2024-12-07 10:31:10.706631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.023 Running I/O for 1 seconds... 
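The bdev_write_zeroes pass started above reuses the same bdevperf harness with a write_zeroes workload for one second on a single core (core mask 0x1 in the EAL parameters). A minimal sketch of that invocation, mirroring the command in the trace:

  # single-core run, 4096-byte write_zeroes commands for 1 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1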
00:19:12.959 48800.00 IOPS, 190.62 MiB/s 00:19:12.959 Latency(us) 00:19:12.959 [2024-12-07T10:31:12.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.959 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:12.959 nvme0n1 : 1.03 7592.16 29.66 0.00 0.00 16846.57 7632.71 27583.02 00:19:12.959 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:12.959 nvme0n2 : 1.03 7584.77 29.63 0.00 0.00 16852.32 7790.62 28425.25 00:19:12.959 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:12.959 nvme0n3 : 1.03 7576.34 29.60 0.00 0.00 16859.22 7843.26 28846.37 00:19:12.959 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:12.959 nvme1n1 : 1.03 7568.23 29.56 0.00 0.00 16866.90 7843.26 28846.37 00:19:12.959 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:12.959 nvme2n1 : 1.03 7560.08 29.53 0.00 0.00 16874.14 7843.26 29267.48 00:19:12.959 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:12.959 nvme3n1 : 1.03 10870.64 42.46 0.00 0.00 11725.75 4237.47 23371.87 00:19:12.959 [2024-12-07T10:31:12.312Z] =================================================================================================================== 00:19:12.959 [2024-12-07T10:31:12.312Z] Total : 48752.22 190.44 0.00 0.00 15717.83 4237.47 29267.48 00:19:14.335 00:19:14.335 real 0m3.003s 00:19:14.335 user 0m2.242s 00:19:14.335 sys 0m0.570s 00:19:14.335 10:31:13 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.335 ************************************ 00:19:14.335 END TEST bdev_write_zeroes 00:19:14.335 10:31:13 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:14.335 ************************************ 00:19:14.335 10:31:13 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:14.335 10:31:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:14.335 10:31:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.335 10:31:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.335 ************************************ 00:19:14.335 START TEST bdev_json_nonenclosed 00:19:14.335 ************************************ 00:19:14.335 10:31:13 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:14.335 [2024-12-07 10:31:13.494312] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:19:14.335 [2024-12-07 10:31:13.494943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74865 ] 00:19:14.335 [2024-12-07 10:31:13.677182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.598 [2024-12-07 10:31:13.788827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.598 [2024-12-07 10:31:13.789153] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:14.598 [2024-12-07 10:31:13.789322] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:14.598 [2024-12-07 10:31:13.789359] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:14.857 00:19:14.857 real 0m0.640s 00:19:14.857 user 0m0.383s 00:19:14.857 sys 0m0.152s 00:19:14.857 ************************************ 00:19:14.857 END TEST bdev_json_nonenclosed 00:19:14.857 ************************************ 00:19:14.857 10:31:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.857 10:31:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:14.857 10:31:14 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:14.857 10:31:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:14.857 10:31:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.857 10:31:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.857 ************************************ 00:19:14.857 START TEST bdev_json_nonarray 00:19:14.857 ************************************ 00:19:14.857 10:31:14 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:15.116 [2024-12-07 10:31:14.210569] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:15.116 [2024-12-07 10:31:14.210830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74892 ] 00:19:15.116 [2024-12-07 10:31:14.391837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.374 [2024-12-07 10:31:14.506943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.374 [2024-12-07 10:31:14.507041] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:15.374 [2024-12-07 10:31:14.507065] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:15.374 [2024-12-07 10:31:14.507077] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:15.632 00:19:15.632 real 0m0.643s 00:19:15.632 user 0m0.394s 00:19:15.632 sys 0m0.144s 00:19:15.632 ************************************ 00:19:15.632 END TEST bdev_json_nonarray 00:19:15.632 ************************************ 00:19:15.632 10:31:14 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.632 10:31:14 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:15.632 10:31:14 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:19:15.632 10:31:14 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:19:15.632 10:31:14 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:19:15.632 10:31:14 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:19:15.632 10:31:14 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:19:15.632 10:31:14 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:15.632 10:31:14 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:15.632 10:31:14 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:19:15.632 10:31:14 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:19:15.632 10:31:14 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:19:15.632 10:31:14 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:19:15.632 10:31:14 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:16.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:19.856 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:19.856 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:19.856 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:19.856 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:19.856 00:19:19.856 real 0m59.265s 00:19:19.856 user 1m31.602s 00:19:19.856 sys 0m35.101s 00:19:19.856 10:31:18 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.856 ************************************ 00:19:19.856 END TEST blockdev_xnvme 00:19:19.856 ************************************ 00:19:19.856 10:31:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:19.856 10:31:19 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:19.856 10:31:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:19.857 10:31:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.857 10:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:19.857 ************************************ 00:19:19.857 START TEST ublk 00:19:19.857 ************************************ 00:19:19.857 10:31:19 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:19.857 * Looking for test storage... 
00:19:19.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:19.857 10:31:19 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:19.857 10:31:19 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:19:19.857 10:31:19 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:20.117 10:31:19 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:20.117 10:31:19 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.117 10:31:19 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.117 10:31:19 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.117 10:31:19 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.117 10:31:19 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.117 10:31:19 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.117 10:31:19 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.117 10:31:19 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.117 10:31:19 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.117 10:31:19 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.117 10:31:19 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.117 10:31:19 ublk -- scripts/common.sh@344 -- # case "$op" in 00:19:20.117 10:31:19 ublk -- scripts/common.sh@345 -- # : 1 00:19:20.117 10:31:19 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.117 10:31:19 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:20.117 10:31:19 ublk -- scripts/common.sh@365 -- # decimal 1 00:19:20.117 10:31:19 ublk -- scripts/common.sh@353 -- # local d=1 00:19:20.117 10:31:19 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.117 10:31:19 ublk -- scripts/common.sh@355 -- # echo 1 00:19:20.117 10:31:19 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.117 10:31:19 ublk -- scripts/common.sh@366 -- # decimal 2 00:19:20.117 10:31:19 ublk -- scripts/common.sh@353 -- # local d=2 00:19:20.117 10:31:19 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.117 10:31:19 ublk -- scripts/common.sh@355 -- # echo 2 00:19:20.117 10:31:19 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.117 10:31:19 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.117 10:31:19 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.117 10:31:19 ublk -- scripts/common.sh@368 -- # return 0 00:19:20.117 10:31:19 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.117 10:31:19 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:20.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.117 --rc genhtml_branch_coverage=1 00:19:20.117 --rc genhtml_function_coverage=1 00:19:20.117 --rc genhtml_legend=1 00:19:20.117 --rc geninfo_all_blocks=1 00:19:20.117 --rc geninfo_unexecuted_blocks=1 00:19:20.117 00:19:20.117 ' 00:19:20.117 10:31:19 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:20.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.117 --rc genhtml_branch_coverage=1 00:19:20.117 --rc genhtml_function_coverage=1 00:19:20.117 --rc genhtml_legend=1 00:19:20.117 --rc geninfo_all_blocks=1 00:19:20.117 --rc geninfo_unexecuted_blocks=1 00:19:20.117 00:19:20.117 ' 00:19:20.117 10:31:19 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:20.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.117 --rc genhtml_branch_coverage=1 00:19:20.117 --rc 
genhtml_function_coverage=1 00:19:20.117 --rc genhtml_legend=1 00:19:20.117 --rc geninfo_all_blocks=1 00:19:20.117 --rc geninfo_unexecuted_blocks=1 00:19:20.117 00:19:20.117 ' 00:19:20.117 10:31:19 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:20.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.117 --rc genhtml_branch_coverage=1 00:19:20.117 --rc genhtml_function_coverage=1 00:19:20.117 --rc genhtml_legend=1 00:19:20.117 --rc geninfo_all_blocks=1 00:19:20.117 --rc geninfo_unexecuted_blocks=1 00:19:20.117 00:19:20.117 ' 00:19:20.117 10:31:19 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:20.117 10:31:19 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:20.117 10:31:19 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:20.117 10:31:19 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:20.117 10:31:19 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:20.117 10:31:19 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:20.117 10:31:19 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:20.117 10:31:19 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:20.117 10:31:19 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:19:20.117 10:31:19 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:19:20.117 10:31:19 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:19:20.117 10:31:19 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:19:20.117 10:31:19 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:19:20.117 10:31:19 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:19:20.117 10:31:19 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:19:20.117 10:31:19 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:19:20.117 10:31:19 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:19:20.117 10:31:19 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:19:20.117 10:31:19 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:19:20.117 10:31:19 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:19:20.117 10:31:19 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:20.117 10:31:19 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.117 10:31:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:20.117 ************************************ 00:19:20.117 START TEST test_save_ublk_config 00:19:20.117 ************************************ 00:19:20.117 10:31:19 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:19:20.117 10:31:19 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:19:20.117 10:31:19 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:19:20.117 10:31:19 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75191 00:19:20.117 10:31:19 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:19:20.117 10:31:19 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75191 00:19:20.117 10:31:19 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75191 ']' 00:19:20.117 10:31:19 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:20.117 10:31:19 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.117 10:31:19 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.117 10:31:19 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.117 10:31:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:20.117 [2024-12-07 10:31:19.448845] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:20.117 [2024-12-07 10:31:19.451751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75191 ] 00:19:20.377 [2024-12-07 10:31:19.631140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.636 [2024-12-07 10:31:19.742716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.576 10:31:20 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.576 10:31:20 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:19:21.576 10:31:20 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:19:21.576 10:31:20 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:19:21.576 10:31:20 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.576 10:31:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:21.576 [2024-12-07 10:31:20.621003] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:21.576 [2024-12-07 10:31:20.621956] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:21.576 malloc0 00:19:21.576 [2024-12-07 10:31:20.700120] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:19:21.576 [2024-12-07 10:31:20.700208] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:19:21.576 [2024-12-07 10:31:20.700221] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:21.576 [2024-12-07 10:31:20.700229] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:21.576 [2024-12-07 10:31:20.709129] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:21.576 [2024-12-07 10:31:20.709156] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:21.577 [2024-12-07 10:31:20.710123] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:21.577 [2024-12-07 10:31:20.710672] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:21.577 [2024-12-07 10:31:20.719911] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:21.577 0 00:19:21.577 10:31:20 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.577 10:31:20 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:19:21.577 10:31:20 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.577 10:31:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:21.837 10:31:21 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:21.837 10:31:21 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:19:21.837 "subsystems": [ 00:19:21.837 { 00:19:21.837 "subsystem": "fsdev", 00:19:21.837 "config": [ 00:19:21.837 { 00:19:21.837 "method": "fsdev_set_opts", 00:19:21.837 "params": { 00:19:21.837 "fsdev_io_pool_size": 65535, 00:19:21.837 "fsdev_io_cache_size": 256 00:19:21.837 } 00:19:21.837 } 00:19:21.837 ] 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "subsystem": "keyring", 00:19:21.837 "config": [] 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "subsystem": "iobuf", 00:19:21.837 "config": [ 00:19:21.837 { 00:19:21.837 "method": "iobuf_set_options", 00:19:21.837 "params": { 00:19:21.837 "small_pool_count": 8192, 00:19:21.837 "large_pool_count": 1024, 00:19:21.837 "small_bufsize": 8192, 00:19:21.837 "large_bufsize": 135168, 00:19:21.837 "enable_numa": false 00:19:21.837 } 00:19:21.837 } 00:19:21.837 ] 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "subsystem": "sock", 00:19:21.837 "config": [ 00:19:21.837 { 00:19:21.837 "method": "sock_set_default_impl", 00:19:21.837 "params": { 00:19:21.837 "impl_name": "posix" 00:19:21.837 } 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "method": "sock_impl_set_options", 00:19:21.837 "params": { 00:19:21.837 "impl_name": "ssl", 00:19:21.837 "recv_buf_size": 4096, 00:19:21.837 "send_buf_size": 4096, 00:19:21.837 "enable_recv_pipe": true, 00:19:21.837 "enable_quickack": false, 00:19:21.837 "enable_placement_id": 0, 00:19:21.837 "enable_zerocopy_send_server": true, 00:19:21.837 "enable_zerocopy_send_client": false, 00:19:21.837 "zerocopy_threshold": 0, 00:19:21.837 "tls_version": 0, 00:19:21.837 "enable_ktls": false 00:19:21.837 } 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "method": "sock_impl_set_options", 00:19:21.837 "params": { 00:19:21.837 "impl_name": "posix", 00:19:21.837 "recv_buf_size": 2097152, 00:19:21.837 "send_buf_size": 2097152, 00:19:21.837 "enable_recv_pipe": true, 00:19:21.837 "enable_quickack": false, 00:19:21.837 "enable_placement_id": 0, 00:19:21.837 "enable_zerocopy_send_server": true, 00:19:21.837 "enable_zerocopy_send_client": false, 00:19:21.837 "zerocopy_threshold": 0, 00:19:21.837 "tls_version": 0, 00:19:21.837 "enable_ktls": false 00:19:21.837 } 00:19:21.837 } 00:19:21.837 ] 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "subsystem": "vmd", 00:19:21.837 "config": [] 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "subsystem": "accel", 00:19:21.837 "config": [ 00:19:21.837 { 00:19:21.837 "method": "accel_set_options", 00:19:21.837 "params": { 00:19:21.837 "small_cache_size": 128, 00:19:21.837 "large_cache_size": 16, 00:19:21.837 "task_count": 2048, 00:19:21.837 "sequence_count": 2048, 00:19:21.837 "buf_count": 2048 00:19:21.837 } 00:19:21.837 } 00:19:21.837 ] 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "subsystem": "bdev", 00:19:21.837 "config": [ 00:19:21.837 { 00:19:21.837 "method": "bdev_set_options", 00:19:21.837 "params": { 00:19:21.837 "bdev_io_pool_size": 65535, 00:19:21.837 "bdev_io_cache_size": 256, 00:19:21.837 "bdev_auto_examine": true, 00:19:21.837 "iobuf_small_cache_size": 128, 00:19:21.837 "iobuf_large_cache_size": 16 00:19:21.837 } 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "method": "bdev_raid_set_options", 00:19:21.837 "params": { 00:19:21.837 "process_window_size_kb": 1024, 00:19:21.837 "process_max_bandwidth_mb_sec": 0 00:19:21.837 } 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "method": "bdev_iscsi_set_options", 00:19:21.837 "params": { 00:19:21.837 "timeout_sec": 30 00:19:21.837 } 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 
"method": "bdev_nvme_set_options", 00:19:21.837 "params": { 00:19:21.837 "action_on_timeout": "none", 00:19:21.837 "timeout_us": 0, 00:19:21.837 "timeout_admin_us": 0, 00:19:21.837 "keep_alive_timeout_ms": 10000, 00:19:21.837 "arbitration_burst": 0, 00:19:21.837 "low_priority_weight": 0, 00:19:21.837 "medium_priority_weight": 0, 00:19:21.837 "high_priority_weight": 0, 00:19:21.837 "nvme_adminq_poll_period_us": 10000, 00:19:21.837 "nvme_ioq_poll_period_us": 0, 00:19:21.837 "io_queue_requests": 0, 00:19:21.837 "delay_cmd_submit": true, 00:19:21.837 "transport_retry_count": 4, 00:19:21.837 "bdev_retry_count": 3, 00:19:21.837 "transport_ack_timeout": 0, 00:19:21.837 "ctrlr_loss_timeout_sec": 0, 00:19:21.837 "reconnect_delay_sec": 0, 00:19:21.837 "fast_io_fail_timeout_sec": 0, 00:19:21.837 "disable_auto_failback": false, 00:19:21.837 "generate_uuids": false, 00:19:21.837 "transport_tos": 0, 00:19:21.837 "nvme_error_stat": false, 00:19:21.837 "rdma_srq_size": 0, 00:19:21.837 "io_path_stat": false, 00:19:21.837 "allow_accel_sequence": false, 00:19:21.837 "rdma_max_cq_size": 0, 00:19:21.837 "rdma_cm_event_timeout_ms": 0, 00:19:21.837 "dhchap_digests": [ 00:19:21.837 "sha256", 00:19:21.837 "sha384", 00:19:21.837 "sha512" 00:19:21.837 ], 00:19:21.837 "dhchap_dhgroups": [ 00:19:21.837 "null", 00:19:21.837 "ffdhe2048", 00:19:21.837 "ffdhe3072", 00:19:21.837 "ffdhe4096", 00:19:21.837 "ffdhe6144", 00:19:21.837 "ffdhe8192" 00:19:21.837 ] 00:19:21.837 } 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "method": "bdev_nvme_set_hotplug", 00:19:21.837 "params": { 00:19:21.837 "period_us": 100000, 00:19:21.837 "enable": false 00:19:21.837 } 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "method": "bdev_malloc_create", 00:19:21.837 "params": { 00:19:21.837 "name": "malloc0", 00:19:21.837 "num_blocks": 8192, 00:19:21.837 "block_size": 4096, 00:19:21.837 "physical_block_size": 4096, 00:19:21.837 "uuid": "94175a09-fb9a-4d0b-9774-0c9a1552d648", 00:19:21.837 "optimal_io_boundary": 0, 00:19:21.837 "md_size": 0, 00:19:21.837 "dif_type": 0, 00:19:21.837 "dif_is_head_of_md": false, 00:19:21.837 "dif_pi_format": 0 00:19:21.837 } 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "method": "bdev_wait_for_examine" 00:19:21.837 } 00:19:21.837 ] 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "subsystem": "scsi", 00:19:21.837 "config": null 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "subsystem": "scheduler", 00:19:21.837 "config": [ 00:19:21.837 { 00:19:21.837 "method": "framework_set_scheduler", 00:19:21.837 "params": { 00:19:21.837 "name": "static" 00:19:21.837 } 00:19:21.837 } 00:19:21.837 ] 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "subsystem": "vhost_scsi", 00:19:21.837 "config": [] 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "subsystem": "vhost_blk", 00:19:21.837 "config": [] 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "subsystem": "ublk", 00:19:21.837 "config": [ 00:19:21.837 { 00:19:21.837 "method": "ublk_create_target", 00:19:21.837 "params": { 00:19:21.837 "cpumask": "1" 00:19:21.837 } 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "method": "ublk_start_disk", 00:19:21.837 "params": { 00:19:21.837 "bdev_name": "malloc0", 00:19:21.837 "ublk_id": 0, 00:19:21.837 "num_queues": 1, 00:19:21.837 "queue_depth": 128 00:19:21.837 } 00:19:21.837 } 00:19:21.837 ] 00:19:21.837 }, 00:19:21.837 { 00:19:21.837 "subsystem": "nbd", 00:19:21.838 "config": [] 00:19:21.838 }, 00:19:21.838 { 00:19:21.838 "subsystem": "nvmf", 00:19:21.838 "config": [ 00:19:21.838 { 00:19:21.838 "method": "nvmf_set_config", 00:19:21.838 "params": { 00:19:21.838 
"discovery_filter": "match_any", 00:19:21.838 "admin_cmd_passthru": { 00:19:21.838 "identify_ctrlr": false 00:19:21.838 }, 00:19:21.838 "dhchap_digests": [ 00:19:21.838 "sha256", 00:19:21.838 "sha384", 00:19:21.838 "sha512" 00:19:21.838 ], 00:19:21.838 "dhchap_dhgroups": [ 00:19:21.838 "null", 00:19:21.838 "ffdhe2048", 00:19:21.838 "ffdhe3072", 00:19:21.838 "ffdhe4096", 00:19:21.838 "ffdhe6144", 00:19:21.838 "ffdhe8192" 00:19:21.838 ] 00:19:21.838 } 00:19:21.838 }, 00:19:21.838 { 00:19:21.838 "method": "nvmf_set_max_subsystems", 00:19:21.838 "params": { 00:19:21.838 "max_subsystems": 1024 00:19:21.838 } 00:19:21.838 }, 00:19:21.838 { 00:19:21.838 "method": "nvmf_set_crdt", 00:19:21.838 "params": { 00:19:21.838 "crdt1": 0, 00:19:21.838 "crdt2": 0, 00:19:21.838 "crdt3": 0 00:19:21.838 } 00:19:21.838 } 00:19:21.838 ] 00:19:21.838 }, 00:19:21.838 { 00:19:21.838 "subsystem": "iscsi", 00:19:21.838 "config": [ 00:19:21.838 { 00:19:21.838 "method": "iscsi_set_options", 00:19:21.838 "params": { 00:19:21.838 "node_base": "iqn.2016-06.io.spdk", 00:19:21.838 "max_sessions": 128, 00:19:21.838 "max_connections_per_session": 2, 00:19:21.838 "max_queue_depth": 64, 00:19:21.838 "default_time2wait": 2, 00:19:21.838 "default_time2retain": 20, 00:19:21.838 "first_burst_length": 8192, 00:19:21.838 "immediate_data": true, 00:19:21.838 "allow_duplicated_isid": false, 00:19:21.838 "error_recovery_level": 0, 00:19:21.838 "nop_timeout": 60, 00:19:21.838 "nop_in_interval": 30, 00:19:21.838 "disable_chap": false, 00:19:21.838 "require_chap": false, 00:19:21.838 "mutual_chap": false, 00:19:21.838 "chap_group": 0, 00:19:21.838 "max_large_datain_per_connection": 64, 00:19:21.838 "max_r2t_per_connection": 4, 00:19:21.838 "pdu_pool_size": 36864, 00:19:21.838 "immediate_data_pool_size": 16384, 00:19:21.838 "data_out_pool_size": 2048 00:19:21.838 } 00:19:21.838 } 00:19:21.838 ] 00:19:21.838 } 00:19:21.838 ] 00:19:21.838 }' 00:19:21.838 10:31:21 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75191 00:19:21.838 10:31:21 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75191 ']' 00:19:21.838 10:31:21 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75191 00:19:21.838 10:31:21 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:19:21.838 10:31:21 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.838 10:31:21 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75191 00:19:21.838 killing process with pid 75191 00:19:21.838 10:31:21 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.838 10:31:21 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.838 10:31:21 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75191' 00:19:21.838 10:31:21 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75191 00:19:21.838 10:31:21 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75191 00:19:23.218 [2024-12-07 10:31:22.449494] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:23.218 [2024-12-07 10:31:22.493066] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:23.218 [2024-12-07 10:31:22.493178] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:23.218 [2024-12-07 10:31:22.501039] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:23.218 [2024-12-07 10:31:22.501265] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:23.218 [2024-12-07 10:31:22.501317] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:23.218 [2024-12-07 10:31:22.501422] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:23.218 [2024-12-07 10:31:22.501739] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:25.757 10:31:24 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75268 00:19:25.757 10:31:24 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75268 00:19:25.757 10:31:24 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:19:25.757 10:31:24 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:19:25.757 "subsystems": [ 00:19:25.757 { 00:19:25.757 "subsystem": "fsdev", 00:19:25.757 "config": [ 00:19:25.757 { 00:19:25.757 "method": "fsdev_set_opts", 00:19:25.757 "params": { 00:19:25.757 "fsdev_io_pool_size": 65535, 00:19:25.757 "fsdev_io_cache_size": 256 00:19:25.757 } 00:19:25.757 } 00:19:25.757 ] 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "subsystem": "keyring", 00:19:25.757 "config": [] 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "subsystem": "iobuf", 00:19:25.757 "config": [ 00:19:25.757 { 00:19:25.757 "method": "iobuf_set_options", 00:19:25.757 "params": { 00:19:25.757 "small_pool_count": 8192, 00:19:25.757 "large_pool_count": 1024, 00:19:25.757 "small_bufsize": 8192, 00:19:25.757 "large_bufsize": 135168, 00:19:25.757 "enable_numa": false 00:19:25.757 } 00:19:25.757 } 00:19:25.757 ] 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "subsystem": "sock", 00:19:25.757 "config": [ 00:19:25.757 { 00:19:25.757 "method": "sock_set_default_impl", 00:19:25.757 "params": { 00:19:25.757 "impl_name": "posix" 00:19:25.757 } 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "method": "sock_impl_set_options", 00:19:25.757 "params": { 00:19:25.757 "impl_name": "ssl", 00:19:25.757 "recv_buf_size": 4096, 00:19:25.757 "send_buf_size": 4096, 00:19:25.757 "enable_recv_pipe": true, 00:19:25.757 "enable_quickack": false, 00:19:25.757 "enable_placement_id": 0, 00:19:25.757 "enable_zerocopy_send_server": true, 00:19:25.757 "enable_zerocopy_send_client": false, 00:19:25.757 "zerocopy_threshold": 0, 00:19:25.757 "tls_version": 0, 00:19:25.757 "enable_ktls": false 00:19:25.757 } 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "method": "sock_impl_set_options", 00:19:25.757 "params": { 00:19:25.757 "impl_name": "posix", 00:19:25.757 "recv_buf_size": 2097152, 00:19:25.757 "send_buf_size": 2097152, 00:19:25.757 "enable_recv_pipe": true, 00:19:25.757 "enable_quickack": false, 00:19:25.757 "enable_placement_id": 0, 00:19:25.757 "enable_zerocopy_send_server": true, 00:19:25.757 "enable_zerocopy_send_client": false, 00:19:25.757 "zerocopy_threshold": 0, 00:19:25.757 "tls_version": 0, 00:19:25.757 "enable_ktls": false 00:19:25.757 } 00:19:25.757 } 00:19:25.757 ] 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "subsystem": "vmd", 00:19:25.757 "config": [] 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "subsystem": "accel", 00:19:25.757 "config": [ 00:19:25.757 { 00:19:25.757 "method": "accel_set_options", 00:19:25.757 "params": { 00:19:25.757 "small_cache_size": 128, 00:19:25.757 "large_cache_size": 16, 00:19:25.757 "task_count": 2048, 00:19:25.757 "sequence_count": 2048, 00:19:25.757 "buf_count": 2048 00:19:25.757 } 00:19:25.757 } 00:19:25.757 ] 00:19:25.757 }, 00:19:25.757 { 
00:19:25.757 "subsystem": "bdev", 00:19:25.757 "config": [ 00:19:25.757 { 00:19:25.757 "method": "bdev_set_options", 00:19:25.757 "params": { 00:19:25.757 "bdev_io_pool_size": 65535, 00:19:25.757 "bdev_io_cache_size": 256, 00:19:25.757 "bdev_auto_examine": true, 00:19:25.757 "iobuf_small_cache_size": 128, 00:19:25.757 "iobuf_large_cache_size": 16 00:19:25.757 } 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "method": "bdev_raid_set_options", 00:19:25.757 "params": { 00:19:25.757 "process_window_size_kb": 1024, 00:19:25.757 "process_max_bandwidth_mb_sec": 0 00:19:25.757 } 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "method": "bdev_iscsi_set_options", 00:19:25.757 "params": { 00:19:25.757 "timeout_sec": 30 00:19:25.757 } 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "method": "bdev_nvme_set_options", 00:19:25.757 "params": { 00:19:25.757 "action_on_timeout": "none", 00:19:25.757 "timeout_us": 0, 00:19:25.757 "timeout_admin_us": 0, 00:19:25.757 "keep_alive_timeout_ms": 10000, 00:19:25.757 "arbitration_burst": 0, 00:19:25.757 "low_priority_weight": 0, 00:19:25.757 "medium_priority_weight": 0, 00:19:25.757 "high_priority_weight": 0, 00:19:25.757 "nvme_adminq_poll_period_us": 10000, 00:19:25.757 "nvme_ioq_poll_period_us": 0, 00:19:25.757 "io_queue_requests": 0, 00:19:25.757 "delay_cmd_submit": true, 00:19:25.757 "transport_retry_count": 4, 00:19:25.757 "bdev_retry_count": 3, 00:19:25.757 "transport_ack_timeout": 0, 00:19:25.757 "ctrlr_loss_timeout_sec": 0, 00:19:25.757 "reconnect_delay_sec": 0, 00:19:25.757 "fast_io_fail_timeout_sec": 0, 00:19:25.757 "disable_auto_failback": false, 00:19:25.757 "generate_uuids": false, 00:19:25.757 "transport_tos": 0, 00:19:25.757 "nvme_error_stat": false, 00:19:25.757 "rdma_srq_size": 0, 00:19:25.757 "io_path_stat": false, 00:19:25.757 "allow_accel_sequence": false, 00:19:25.757 "rdma_max_cq_size": 0, 00:19:25.757 "rdma_cm_event_timeout_ms": 0, 00:19:25.757 "dhchap_digests": [ 00:19:25.757 "sha256", 00:19:25.757 "sha384", 00:19:25.757 "sha512" 00:19:25.757 ], 00:19:25.757 "dhchap_dhgroups": [ 00:19:25.757 "null", 00:19:25.757 "ffdhe2048", 00:19:25.757 "ffdhe3072", 00:19:25.757 "ffdhe4096", 00:19:25.757 "ffdhe6144", 00:19:25.757 "ffdhe8192" 00:19:25.757 ] 00:19:25.757 } 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "method": "bdev_nvme_set_hotplug", 00:19:25.757 "params": { 00:19:25.757 "period_us": 100000, 00:19:25.757 "enable": false 00:19:25.757 } 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "method": "bdev_malloc_create", 00:19:25.757 "params": { 00:19:25.757 "name": "malloc0", 00:19:25.757 "num_blocks": 8192, 00:19:25.757 "block_size": 4096, 00:19:25.757 "physical_block_size": 4096, 00:19:25.757 "uuid": "94175a09-fb9a-4d0b-9774-0c9a1552d648", 00:19:25.757 "optimal_io_boundary": 0, 00:19:25.757 "md_size": 0, 00:19:25.757 "dif_type": 0, 00:19:25.757 "dif_is_head_of_md": false, 00:19:25.757 "dif_pi_format": 0 00:19:25.757 } 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "method": "bdev_wait_for_examine" 00:19:25.757 } 00:19:25.757 ] 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "subsystem": "scsi", 00:19:25.757 "config": null 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "subsystem": "scheduler", 00:19:25.757 "config": [ 00:19:25.757 { 00:19:25.757 "method": "framework_set_scheduler", 00:19:25.757 "params": { 00:19:25.757 "name": "static" 00:19:25.757 } 00:19:25.757 } 00:19:25.757 ] 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "subsystem": "vhost_scsi", 00:19:25.757 "config": [] 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "subsystem": "vhost_blk", 00:19:25.757 "config": [] 
00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "subsystem": "ublk", 00:19:25.757 "config": [ 00:19:25.757 { 00:19:25.757 "method": "ublk_create_target", 00:19:25.757 "params": { 00:19:25.757 "cpumask": "1" 00:19:25.757 } 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "method": "ublk_start_disk", 00:19:25.757 "params": { 00:19:25.757 "bdev_name": "malloc0", 00:19:25.757 "ublk_id": 0, 00:19:25.757 "num_queues": 1, 00:19:25.757 "queue_depth": 128 00:19:25.757 } 00:19:25.757 } 00:19:25.757 ] 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "subsystem": "nbd", 00:19:25.757 "config": [] 00:19:25.757 }, 00:19:25.758 { 00:19:25.758 "subsystem": "nvmf", 00:19:25.758 "config": [ 00:19:25.758 { 00:19:25.758 "method": "nvmf_set_config", 00:19:25.758 "params": { 00:19:25.758 "discovery_filter": "match_any", 00:19:25.758 "admin_cmd_passthru": { 00:19:25.758 "identify_ctrlr": false 00:19:25.758 }, 00:19:25.758 "dhchap_digests": [ 00:19:25.758 "sha256", 00:19:25.758 "sha384", 00:19:25.758 "sha512" 00:19:25.758 ], 00:19:25.758 "dhchap_dhgroups": [ 00:19:25.758 "null", 00:19:25.758 "ffdhe2048", 00:19:25.758 "ffdhe3072", 00:19:25.758 "ffdhe4096", 00:19:25.758 "ffdhe6144", 00:19:25.758 "ffdhe81 10:31:24 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75268 ']' 00:19:25.758 10:31:24 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.758 92" 00:19:25.758 ] 00:19:25.758 } 00:19:25.758 }, 00:19:25.758 { 00:19:25.758 "method": "nvmf_set_max_subsystems", 00:19:25.758 "params": { 00:19:25.758 "max_subsystems": 1024 00:19:25.758 } 00:19:25.758 }, 00:19:25.758 { 00:19:25.758 "method": "nvmf_set_crdt", 00:19:25.758 "params": { 00:19:25.758 "crdt1": 0, 00:19:25.758 "crdt2": 0, 00:19:25.758 "crdt3": 0 00:19:25.758 } 00:19:25.758 } 00:19:25.758 ] 00:19:25.758 }, 00:19:25.758 { 00:19:25.758 "subsystem": "iscsi", 00:19:25.758 "config": [ 00:19:25.758 { 00:19:25.758 "method": "iscsi_set_options", 00:19:25.758 "params": { 00:19:25.758 "node_base": "iqn.2016-06.io.spdk", 00:19:25.758 "max_sessions": 128, 00:19:25.758 "max_connections_per_session": 2, 00:19:25.758 "max_queue_depth": 64, 00:19:25.758 "default_time2wait": 2, 00:19:25.758 "default_time2retain": 20, 00:19:25.758 "first_burst_length": 8192, 00:19:25.758 "immediate_data": true, 00:19:25.758 "allow_duplicated_isid": false, 00:19:25.758 "error_recovery_level": 0, 00:19:25.758 "nop_timeout": 60, 00:19:25.758 "nop_in_interval": 30, 00:19:25.758 "disable_chap": false, 00:19:25.758 "require_chap": false, 00:19:25.758 "mutual_chap": false, 00:19:25.758 "chap_group": 0, 00:19:25.758 "max_large_datain_per_connection": 64, 00:19:25.758 "max_r2t_per_connection": 4, 00:19:25.758 "pdu_pool_size": 36864, 00:19:25.758 "immediate_data_pool_size": 16384, 00:19:25.758 "data_out_pool_size": 2048 00:19:25.758 } 00:19:25.758 } 00:19:25.758 ] 00:19:25.758 } 00:19:25.758 ] 00:19:25.758 }' 00:19:25.758 10:31:24 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.758 10:31:24 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:25.758 10:31:24 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.758 10:31:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:25.758 [2024-12-07 10:31:24.765610] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:25.758 [2024-12-07 10:31:24.765727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75268 ] 00:19:25.758 [2024-12-07 10:31:24.943217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.758 [2024-12-07 10:31:25.051104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.134 [2024-12-07 10:31:26.066995] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:27.134 [2024-12-07 10:31:26.068009] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:27.134 [2024-12-07 10:31:26.075118] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:19:27.134 [2024-12-07 10:31:26.075205] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:19:27.134 [2024-12-07 10:31:26.075218] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:27.134 [2024-12-07 10:31:26.075226] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:27.134 [2024-12-07 10:31:26.084091] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:27.134 [2024-12-07 10:31:26.084115] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:27.134 [2024-12-07 10:31:26.091027] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:27.134 [2024-12-07 10:31:26.091115] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:27.134 [2024-12-07 10:31:26.107999] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75268 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75268 ']' 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75268 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75268 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75268' 00:19:27.134 killing process with pid 75268 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75268 00:19:27.134 10:31:26 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75268 00:19:28.513 [2024-12-07 10:31:27.737347] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:28.513 [2024-12-07 10:31:27.780069] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:28.513 [2024-12-07 10:31:27.780190] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:28.513 [2024-12-07 10:31:27.787011] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:28.513 [2024-12-07 10:31:27.787065] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:28.513 [2024-12-07 10:31:27.787074] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:28.513 [2024-12-07 10:31:27.787100] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:28.513 [2024-12-07 10:31:27.787242] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:30.423 10:31:29 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:19:30.423 00:19:30.423 real 0m10.243s 00:19:30.423 user 0m7.514s 00:19:30.423 sys 0m3.426s 00:19:30.423 ************************************ 00:19:30.423 END TEST test_save_ublk_config 00:19:30.423 ************************************ 00:19:30.423 10:31:29 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.423 10:31:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:30.423 10:31:29 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75348 00:19:30.423 10:31:29 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:30.423 10:31:29 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.423 10:31:29 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75348 00:19:30.423 10:31:29 ublk -- common/autotest_common.sh@835 -- # '[' -z 75348 ']' 00:19:30.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.423 10:31:29 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.423 10:31:29 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.423 10:31:29 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.423 10:31:29 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.423 10:31:29 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:30.423 [2024-12-07 10:31:29.756920] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
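test_save_ublk_config above starts spdk_tgt, creates a malloc-backed ublk device, captures the live configuration with save_config, then relaunches spdk_tgt with that JSON fed back via -c /dev/fd/63 and verifies that /dev/ublkb0 comes back. Assuming rpc_cmd in these traces wraps scripts/rpc.py, a hand-run sketch of the same round trip (temp file instead of process substitution, waits and socket path left implicit) would look roughly like:

  $ cd /home/vagrant/spdk_repo/spdk
  $ ./build/bin/spdk_tgt -L ublk &
  $ ./scripts/rpc.py ublk_create_target
  $ ./scripts/rpc.py bdev_malloc_create -b malloc0 128 4096
  $ ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128     # 1 queue, depth 128 as in the saved config
  $ ./scripts/rpc.py save_config > /tmp/ublk.json
  $ ./scripts/rpc.py ublk_stop_disk 0
  $ ./scripts/rpc.py ublk_destroy_target
  $ kill %1
  $ ./build/bin/spdk_tgt -L ublk -c /tmp/ublk.json &
  $ ./scripts/rpc.py ublk_get_disks -n 0 | jq -r '.[0].ublk_device'   # expect /dev/ublkb0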
00:19:30.423 [2024-12-07 10:31:29.757295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75348 ] 00:19:30.683 [2024-12-07 10:31:29.941484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:30.942 [2024-12-07 10:31:30.056618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.942 [2024-12-07 10:31:30.056656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.878 10:31:30 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.878 10:31:30 ublk -- common/autotest_common.sh@868 -- # return 0 00:19:31.878 10:31:30 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:19:31.878 10:31:30 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:31.878 10:31:30 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.878 10:31:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:31.878 ************************************ 00:19:31.878 START TEST test_create_ublk 00:19:31.878 ************************************ 00:19:31.878 10:31:30 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:19:31.878 10:31:30 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:19:31.878 10:31:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.878 10:31:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:31.878 [2024-12-07 10:31:30.915015] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:31.878 [2024-12-07 10:31:30.917370] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:31.878 10:31:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.878 10:31:30 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:19:31.878 10:31:30 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:19:31.878 10:31:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.878 10:31:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:31.878 10:31:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.878 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:19:31.878 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:31.878 10:31:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.878 10:31:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:31.878 [2024-12-07 10:31:31.212157] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:19:31.878 [2024-12-07 10:31:31.212614] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:31.878 [2024-12-07 10:31:31.212635] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:31.878 [2024-12-07 10:31:31.212644] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:31.878 [2024-12-07 10:31:31.216794] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:31.878 [2024-12-07 10:31:31.216820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:31.878 
[2024-12-07 10:31:31.217993] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:31.878 [2024-12-07 10:31:31.218688] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:32.137 [2024-12-07 10:31:31.233058] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:32.137 10:31:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.137 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:19:32.137 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:19:32.138 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:19:32.138 10:31:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.138 10:31:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:32.138 10:31:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.138 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:19:32.138 { 00:19:32.138 "ublk_device": "/dev/ublkb0", 00:19:32.138 "id": 0, 00:19:32.138 "queue_depth": 512, 00:19:32.138 "num_queues": 4, 00:19:32.138 "bdev_name": "Malloc0" 00:19:32.138 } 00:19:32.138 ]' 00:19:32.138 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:19:32.138 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:32.138 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:19:32.138 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:19:32.138 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:19:32.138 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:19:32.138 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:19:32.138 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:19:32.138 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:19:32.138 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:32.138 10:31:31 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:19:32.138 10:31:31 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:19:32.138 10:31:31 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:19:32.138 10:31:31 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:19:32.138 10:31:31 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:19:32.138 10:31:31 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:19:32.138 10:31:31 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:19:32.138 10:31:31 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:19:32.138 10:31:31 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:19:32.138 10:31:31 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:19:32.138 10:31:31 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:19:32.138 10:31:31 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:19:32.397 fio: verification read phase will never start because write phase uses all of runtime 00:19:32.397 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:19:32.397 fio-3.35 00:19:32.397 Starting 1 process 00:19:42.423 00:19:42.423 fio_test: (groupid=0, jobs=1): err= 0: pid=75400: Sat Dec 7 10:31:41 2024 00:19:42.423 write: IOPS=11.3k, BW=44.0MiB/s (46.1MB/s)(440MiB/10001msec); 0 zone resets 00:19:42.423 clat (usec): min=37, max=4692, avg=87.93, stdev=119.84 00:19:42.423 lat (usec): min=37, max=4693, avg=88.39, stdev=119.86 00:19:42.423 clat percentiles (usec): 00:19:42.423 | 1.00th=[ 39], 5.00th=[ 51], 10.00th=[ 55], 20.00th=[ 78], 00:19:42.423 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 85], 00:19:42.423 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 101], 95.00th=[ 121], 00:19:42.423 | 99.00th=[ 139], 99.50th=[ 147], 99.90th=[ 2540], 99.95th=[ 3130], 00:19:42.423 | 99.99th=[ 3720] 00:19:42.423 bw ( KiB/s): min=38848, max=75680, per=100.00%, avg=45295.16, stdev=9590.51, samples=19 00:19:42.423 iops : min= 9712, max=18920, avg=11323.79, stdev=2397.63, samples=19 00:19:42.423 lat (usec) : 50=4.92%, 100=84.68%, 250=10.12%, 500=0.01%, 750=0.01% 00:19:42.423 lat (usec) : 1000=0.01% 00:19:42.423 lat (msec) : 2=0.08%, 4=0.15%, 10=0.01% 00:19:42.423 cpu : usr=2.13%, sys=8.71%, ctx=112656, majf=0, minf=796 00:19:42.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:42.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.423 issued rwts: total=0,112654,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:42.423 00:19:42.423 Run status group 0 (all jobs): 00:19:42.423 WRITE: bw=44.0MiB/s (46.1MB/s), 44.0MiB/s-44.0MiB/s (46.1MB/s-46.1MB/s), io=440MiB (461MB), run=10001-10001msec 00:19:42.423 00:19:42.423 Disk stats (read/write): 00:19:42.423 ublkb0: ios=0/111812, merge=0/0, ticks=0/8791, in_queue=8792, util=99.07% 00:19:42.423 10:31:41 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:19:42.424 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.424 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:42.424 [2024-12-07 10:31:41.743486] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:42.684 [2024-12-07 10:31:41.785618] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:42.684 [2024-12-07 10:31:41.786493] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:42.684 [2024-12-07 10:31:41.802868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:42.684 [2024-12-07 10:31:41.803328] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:42.684 [2024-12-07 10:31:41.803352] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.684 10:31:41 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:42.684 [2024-12-07 10:31:41.820146] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:19:42.684 request: 00:19:42.684 { 00:19:42.684 "ublk_id": 0, 00:19:42.684 "method": "ublk_stop_disk", 00:19:42.684 "req_id": 1 00:19:42.684 } 00:19:42.684 Got JSON-RPC error response 00:19:42.684 response: 00:19:42.684 { 00:19:42.684 "code": -19, 00:19:42.684 "message": "No such device" 00:19:42.684 } 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.684 10:31:41 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:42.684 [2024-12-07 10:31:41.839146] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:42.684 [2024-12-07 10:31:41.846559] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:42.684 [2024-12-07 10:31:41.846597] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:42.684 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.685 10:31:41 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:42.685 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.685 10:31:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:43.254 10:31:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.254 10:31:42 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:19:43.254 10:31:42 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:43.254 10:31:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.254 10:31:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:43.254 10:31:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.254 10:31:42 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:43.254 10:31:42 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:19:43.514 10:31:42 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:43.514 10:31:42 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:43.514 10:31:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.514 10:31:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:43.514 10:31:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.514 10:31:42 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:43.514 10:31:42 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:19:43.514 ************************************ 00:19:43.514 END TEST test_create_ublk 00:19:43.514 ************************************ 00:19:43.514 10:31:42 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:43.514 00:19:43.514 real 0m11.759s 00:19:43.514 user 0m0.595s 00:19:43.514 sys 0m1.008s 00:19:43.514 10:31:42 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.514 10:31:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:43.514 10:31:42 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:19:43.514 10:31:42 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:43.514 10:31:42 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.514 10:31:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:43.514 ************************************ 00:19:43.514 START TEST test_create_multi_ublk 00:19:43.514 ************************************ 00:19:43.514 10:31:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:19:43.514 10:31:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:19:43.514 10:31:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.514 10:31:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:43.514 [2024-12-07 10:31:42.752005] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:43.514 [2024-12-07 10:31:42.754569] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:43.514 10:31:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.514 10:31:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:19:43.514 10:31:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:19:43.514 10:31:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:43.514 10:31:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:19:43.515 10:31:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.515 10:31:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:43.774 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.774 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:19:43.774 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:43.774 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.774 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:43.774 [2024-12-07 10:31:43.038150] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:19:43.774 [2024-12-07 10:31:43.038626] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:43.774 [2024-12-07 10:31:43.038645] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:43.774 [2024-12-07 10:31:43.038658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:43.774 [2024-12-07 10:31:43.047377] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:43.774 [2024-12-07 10:31:43.047406] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:43.774 [2024-12-07 10:31:43.052040] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:43.774 [2024-12-07 10:31:43.052634] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:43.774 [2024-12-07 10:31:43.075056] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:43.774 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.774 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:19:43.774 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:43.774 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:19:43.774 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.774 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:44.034 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.034 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:19:44.034 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:19:44.034 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.034 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:44.034 [2024-12-07 10:31:43.371194] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:19:44.034 [2024-12-07 10:31:43.371639] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:19:44.034 [2024-12-07 10:31:43.371653] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:44.034 [2024-12-07 10:31:43.371661] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:44.293 [2024-12-07 10:31:43.385258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:44.294 [2024-12-07 10:31:43.385293] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:44.294 [2024-12-07 10:31:43.395074] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:44.294 [2024-12-07 10:31:43.395649] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:44.294 [2024-12-07 10:31:43.414614] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:44.294 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.294 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:19:44.294 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:44.294 
10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:19:44.294 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.294 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:44.554 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.554 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:19:44.554 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:19:44.554 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.554 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:44.554 [2024-12-07 10:31:43.692157] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:19:44.554 [2024-12-07 10:31:43.692616] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:19:44.554 [2024-12-07 10:31:43.692627] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:19:44.554 [2024-12-07 10:31:43.692638] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:19:44.554 [2024-12-07 10:31:43.701396] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:44.554 [2024-12-07 10:31:43.701423] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:44.554 [2024-12-07 10:31:43.706004] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:44.554 [2024-12-07 10:31:43.706646] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:19:44.554 [2024-12-07 10:31:43.722152] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:19:44.554 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.554 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:19:44.554 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:44.554 10:31:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:19:44.554 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.554 10:31:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:44.820 [2024-12-07 10:31:44.016166] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:19:44.820 [2024-12-07 10:31:44.016598] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:19:44.820 [2024-12-07 10:31:44.016618] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:19:44.820 [2024-12-07 10:31:44.016627] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:19:44.820 
[2024-12-07 10:31:44.032040] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:44.820 [2024-12-07 10:31:44.032067] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:44.820 [2024-12-07 10:31:44.040060] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:44.820 [2024-12-07 10:31:44.040633] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
00:19:44.820 [2024-12-07 10:31:44.064053] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
00:19:44.820 {
00:19:44.820 "ublk_device": "/dev/ublkb0",
00:19:44.820 "id": 0,
00:19:44.820 "queue_depth": 512,
00:19:44.820 "num_queues": 4,
00:19:44.820 "bdev_name": "Malloc0"
00:19:44.820 },
00:19:44.820 {
00:19:44.820 "ublk_device": "/dev/ublkb1",
00:19:44.820 "id": 1,
00:19:44.820 "queue_depth": 512,
00:19:44.820 "num_queues": 4,
00:19:44.820 "bdev_name": "Malloc1"
00:19:44.820 },
00:19:44.820 {
00:19:44.820 "ublk_device": "/dev/ublkb2",
00:19:44.820 "id": 2,
00:19:44.820 "queue_depth": 512,
00:19:44.820 "num_queues": 4,
00:19:44.820 "bdev_name": "Malloc2"
00:19:44.820 },
00:19:44.820 {
00:19:44.820 "ublk_device": "/dev/ublkb3",
00:19:44.820 "id": 3,
00:19:44.820 "queue_depth": 512,
00:19:44.820 "num_queues": 4,
00:19:44.820 "bdev_name": "Malloc3"
00:19:44.820 }
00:19:44.820 ]'
00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:19:44.820 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
00:19:45.080 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
00:19:45.080 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
00:19:45.080 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:19:45.080 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
00:19:45.080 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:19:45.080 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
00:19:45.080 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:19:45.080 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:45.080 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
00:19:45.080 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:19:45.080 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:19:45.080 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:19:45.080 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:45.339 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.599 10:31:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:45.599 [2024-12-07 10:31:44.906106] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:45.599 [2024-12-07 10:31:44.946659] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:45.599 [2024-12-07 10:31:44.948449] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:45.859 [2024-12-07 10:31:44.954039] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:45.859 [2024-12-07 10:31:44.954404] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:45.859 [2024-12-07 10:31:44.954423] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:45.859 10:31:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.859 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:45.859 10:31:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:19:45.859 10:31:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.859 10:31:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:45.859 [2024-12-07 10:31:44.970143] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:45.859 [2024-12-07 10:31:45.003512] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:45.859 [2024-12-07 10:31:45.005367] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:45.859 [2024-12-07 10:31:45.010056] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:45.859 [2024-12-07 10:31:45.010397] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:45.859 [2024-12-07 10:31:45.010416] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:45.859 10:31:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.859 10:31:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:45.859 10:31:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:19:45.859 10:31:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.859 10:31:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:45.859 [2024-12-07 10:31:45.026108] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:19:45.859 [2024-12-07 10:31:45.063727] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:45.859 [2024-12-07 10:31:45.065184] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:19:45.859 [2024-12-07 10:31:45.070253] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:45.859 [2024-12-07 10:31:45.070599] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:19:45.859 [2024-12-07 10:31:45.070613] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:19:45.859 10:31:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.859 10:31:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:45.859 10:31:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:19:45.859 10:31:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.859 10:31:45 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:19:45.859 [2024-12-07 10:31:45.089092] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:19:45.859 [2024-12-07 10:31:45.129674] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:45.859 [2024-12-07 10:31:45.130442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:19:45.859 [2024-12-07 10:31:45.134136] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:45.859 [2024-12-07 10:31:45.134470] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:19:45.859 [2024-12-07 10:31:45.134483] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:19:45.859 10:31:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.859 10:31:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:19:46.118 [2024-12-07 10:31:45.339129] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:46.118 [2024-12-07 10:31:45.347960] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:46.118 [2024-12-07 10:31:45.348004] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:46.118 10:31:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:19:46.118 10:31:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:46.118 10:31:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:46.118 10:31:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.118 10:31:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:47.055 10:31:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.055 10:31:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:47.055 10:31:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:47.055 10:31:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.055 10:31:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:47.314 10:31:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.314 10:31:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:47.314 10:31:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:19:47.314 10:31:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.314 10:31:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:47.574 10:31:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.574 10:31:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:47.574 10:31:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:19:47.574 10:31:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.574 10:31:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:47.834 10:31:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.834 10:31:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:19:47.834 10:31:47 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:47.834 10:31:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.834 10:31:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:47.834 10:31:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.834 10:31:47 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:47.834 10:31:47 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:19:48.094 10:31:47 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:48.094 10:31:47 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:48.094 10:31:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.094 10:31:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:48.094 10:31:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.094 10:31:47 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:48.094 10:31:47 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:19:48.094 ************************************ 00:19:48.094 END TEST test_create_multi_ublk 00:19:48.094 ************************************ 00:19:48.094 10:31:47 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:48.094 00:19:48.094 real 0m4.524s 00:19:48.094 user 0m0.996s 00:19:48.094 sys 0m0.206s 00:19:48.094 10:31:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.094 10:31:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:48.094 10:31:47 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:48.094 10:31:47 ublk -- ublk/ublk.sh@147 -- # cleanup 00:19:48.094 10:31:47 ublk -- ublk/ublk.sh@130 -- # killprocess 75348 00:19:48.094 10:31:47 ublk -- common/autotest_common.sh@954 -- # '[' -z 75348 ']' 00:19:48.094 10:31:47 ublk -- common/autotest_common.sh@958 -- # kill -0 75348 00:19:48.094 10:31:47 ublk -- common/autotest_common.sh@959 -- # uname 00:19:48.094 10:31:47 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.094 10:31:47 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75348 00:19:48.094 killing process with pid 75348 00:19:48.094 10:31:47 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:48.094 10:31:47 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:48.094 10:31:47 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75348' 00:19:48.094 10:31:47 ublk -- common/autotest_common.sh@973 -- # kill 75348 00:19:48.094 10:31:47 ublk -- common/autotest_common.sh@978 -- # wait 75348 00:19:49.473 [2024-12-07 10:31:48.461200] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:49.473 [2024-12-07 10:31:48.461267] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:50.411 00:19:50.411 real 0m30.628s 00:19:50.411 user 0m43.328s 00:19:50.411 sys 0m10.417s 00:19:50.411 10:31:49 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.411 10:31:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:50.411 ************************************ 00:19:50.411 END TEST ublk 00:19:50.411 ************************************ 00:19:50.411 10:31:49 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:50.411 
10:31:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:50.411 10:31:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.411 10:31:49 -- common/autotest_common.sh@10 -- # set +x 00:19:50.411 ************************************ 00:19:50.411 START TEST ublk_recovery 00:19:50.411 ************************************ 00:19:50.411 10:31:49 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:50.671 * Looking for test storage... 00:19:50.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.671 10:31:49 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:50.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.671 --rc genhtml_branch_coverage=1 00:19:50.671 --rc genhtml_function_coverage=1 00:19:50.671 --rc genhtml_legend=1 00:19:50.671 --rc geninfo_all_blocks=1 00:19:50.671 --rc geninfo_unexecuted_blocks=1 00:19:50.671 00:19:50.671 ' 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:50.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.671 --rc genhtml_branch_coverage=1 00:19:50.671 --rc genhtml_function_coverage=1 00:19:50.671 --rc genhtml_legend=1 00:19:50.671 --rc geninfo_all_blocks=1 00:19:50.671 --rc geninfo_unexecuted_blocks=1 00:19:50.671 00:19:50.671 ' 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:50.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.671 --rc genhtml_branch_coverage=1 00:19:50.671 --rc genhtml_function_coverage=1 00:19:50.671 --rc genhtml_legend=1 00:19:50.671 --rc geninfo_all_blocks=1 00:19:50.671 --rc geninfo_unexecuted_blocks=1 00:19:50.671 00:19:50.671 ' 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:50.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.671 --rc genhtml_branch_coverage=1 00:19:50.671 --rc genhtml_function_coverage=1 00:19:50.671 --rc genhtml_legend=1 00:19:50.671 --rc geninfo_all_blocks=1 00:19:50.671 --rc geninfo_unexecuted_blocks=1 00:19:50.671 00:19:50.671 ' 00:19:50.671 10:31:49 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:50.671 10:31:49 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:50.671 10:31:49 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:50.671 10:31:49 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:50.671 10:31:49 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:50.671 10:31:49 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:50.671 10:31:49 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:50.671 10:31:49 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:50.671 10:31:49 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:19:50.671 10:31:49 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:19:50.671 10:31:49 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75773 00:19:50.671 10:31:49 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:50.671 10:31:49 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.671 10:31:49 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75773 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75773 ']' 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.671 10:31:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.931 [2024-12-07 10:31:50.103318] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:50.931 [2024-12-07 10:31:50.103642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75773 ] 00:19:51.190 [2024-12-07 10:31:50.286499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:51.190 [2024-12-07 10:31:50.389992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.190 [2024-12-07 10:31:50.390057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.129 10:31:51 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.129 10:31:51 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:19:52.129 10:31:51 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:19:52.129 10:31:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.129 10:31:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.129 [2024-12-07 10:31:51.256026] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:52.129 [2024-12-07 10:31:51.258572] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:52.129 10:31:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.129 10:31:51 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:52.129 10:31:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.129 10:31:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.129 malloc0 00:19:52.129 10:31:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.129 10:31:51 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:19:52.129 10:31:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.129 10:31:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.129 [2024-12-07 10:31:51.399193] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:19:52.129 [2024-12-07 10:31:51.399309] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:19:52.129 [2024-12-07 10:31:51.399324] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:52.129 [2024-12-07 10:31:51.399333] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:52.129 [2024-12-07 10:31:51.408154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:52.129 [2024-12-07 10:31:51.408177] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:52.129 [2024-12-07 10:31:51.415038] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:52.129 [2024-12-07 10:31:51.415183] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:52.129 [2024-12-07 10:31:51.431040] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:52.129 1 00:19:52.129 10:31:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.129 10:31:51 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:19:53.505 10:31:52 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75813 00:19:53.505 10:31:52 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:19:53.505 10:31:52 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:19:53.505 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:53.505 fio-3.35 00:19:53.505 Starting 1 process 00:19:58.778 10:31:57 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75773 00:19:58.778 10:31:57 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:20:04.059 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75773 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:20:04.059 10:32:02 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75919 00:20:04.059 10:32:02 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:04.059 10:32:02 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.059 10:32:02 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75919 00:20:04.059 10:32:02 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75919 ']' 00:20:04.059 10:32:02 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.059 10:32:02 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.059 10:32:02 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.059 10:32:02 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.059 10:32:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.059 [2024-12-07 10:32:02.568873] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:20:04.059 [2024-12-07 10:32:02.569001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75919 ] 00:20:04.059 [2024-12-07 10:32:02.751646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:04.059 [2024-12-07 10:32:02.866741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.059 [2024-12-07 10:32:02.866777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.630 10:32:03 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.630 10:32:03 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:04.630 10:32:03 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:20:04.630 10:32:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.630 10:32:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 [2024-12-07 10:32:03.691018] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:04.630 [2024-12-07 10:32:03.693558] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:04.630 10:32:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.630 10:32:03 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:04.630 10:32:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.630 10:32:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 malloc0 00:20:04.630 10:32:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.630 10:32:03 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:20:04.630 10:32:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.630 10:32:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 [2024-12-07 10:32:03.833483] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:20:04.630 [2024-12-07 10:32:03.833526] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:04.630 [2024-12-07 10:32:03.833538] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:20:04.630 [2024-12-07 10:32:03.840072] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:20:04.630 [2024-12-07 10:32:03.840092] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:20:04.630 1 00:20:04.630 10:32:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.630 10:32:03 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75813 00:20:05.568 [2024-12-07 10:32:04.838512] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:20:05.568 [2024-12-07 10:32:04.845020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:20:05.568 [2024-12-07 10:32:04.845037] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:20:06.504 [2024-12-07 10:32:05.843457] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:20:06.504 [2024-12-07 10:32:05.849007] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:20:06.504 [2024-12-07 10:32:05.849029] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1
00:20:07.882 [2024-12-07 10:32:06.847443] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:20:07.882 [2024-12-07 10:32:06.851060] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:20:07.882 [2024-12-07 10:32:06.851070] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:20:07.882 [2024-12-07 10:32:06.851082] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:20:07.882 [2024-12-07 10:32:06.851178] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:20:29.902 [2024-12-07 10:32:27.630089] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:20:29.902 [2024-12-07 10:32:27.635447] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:20:29.902 [2024-12-07 10:32:27.640417] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:20:29.902 [2024-12-07 10:32:27.640440] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:20:56.453
00:20:56.453 fio_test: (groupid=0, jobs=1): err= 0: pid=75817: Sat Dec 7 10:32:52 2024
00:20:56.453 read: IOPS=9887, BW=38.6MiB/s (40.5MB/s)(2318MiB/60002msec)
00:20:56.453 slat (usec): min=3, max=1221, avg= 9.46, stdev= 3.15
00:20:56.453 clat (usec): min=1500, max=30197k, avg=6309.75, stdev=308628.30
00:20:56.453 lat (usec): min=1510, max=30197k, avg=6319.20, stdev=308628.31
00:20:56.453 clat percentiles (msec):
00:20:56.453 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3],
00:20:56.453 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 4], 60.00th=[ 4],
00:20:56.453 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5],
00:20:56.453 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 10], 99.95th=[ 10],
00:20:56.453 | 99.99th=[17113]
00:20:56.453 bw ( KiB/s): min= 5312, max=83417, per=100.00%, avg=77923.47, stdev=12643.50, samples=60
00:20:56.453 iops : min= 1328, max=20854, avg=19480.83, stdev=3160.87, samples=60
00:20:56.453 write: IOPS=9877, BW=38.6MiB/s (40.5MB/s)(2315MiB/60002msec); 0 zone resets
00:20:56.453 slat (usec): min=3, max=407, avg= 9.45, stdev= 2.65
00:20:56.453 clat (usec): min=1472, max=30197k, avg=6621.32, stdev=318589.49
00:20:56.453 lat (usec): min=1482, max=30197k, avg=6630.77, stdev=318589.49
00:20:56.453 clat percentiles (msec):
00:20:56.453 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4],
00:20:56.453 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4],
00:20:56.453 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5],
00:20:56.453 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 10], 99.95th=[ 10],
00:20:56.453 | 99.99th=[17113]
00:20:56.453 bw ( KiB/s): min= 5800, max=83776, per=100.00%, avg=77850.57, stdev=12602.47, samples=60
00:20:56.453 iops : min= 1450, max=20944, avg=19462.60, stdev=3150.60, samples=60
00:20:56.453 lat (msec) : 2=0.02%, 4=94.05%, 10=5.89%, 20=0.02%, >=2000=0.01%
00:20:56.453 cpu : usr=6.92%, sys=18.63%, ctx=54699, majf=0, minf=13
00:20:56.453 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:20:56.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:56.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:56.453 issued rwts: total=593288,592693,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:56.453 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:56.453 Run status group 0 (all jobs):
00:20:56.453 READ: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=2318MiB (2430MB), run=60002-60002msec
00:20:56.453 WRITE: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=2315MiB (2428MB), run=60002-60002msec
00:20:56.453
00:20:56.453 Disk stats (read/write):
00:20:56.453 ublkb1: ios=590956/590419, merge=0/0, ticks=3676423/3781398, in_queue=7457821, util=99.96%
00:20:56.453 10:32:52 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:56.453 [2024-12-07 10:32:52.722995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:20:56.453 [2024-12-07 10:32:52.758163] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:20:56.453 [2024-12-07 10:32:52.758376] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:20:56.453 [2024-12-07 10:32:52.766038] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:20:56.453 [2024-12-07 10:32:52.766246] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:20:56.453 [2024-12-07 10:32:52.766258] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.453 10:32:52 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:56.453 [2024-12-07 10:32:52.782109] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:20:56.453 [2024-12-07 10:32:52.790961] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:20:56.453 [2024-12-07 10:32:52.791007] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.453 10:32:52 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:20:56.453 10:32:52 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:20:56.453 10:32:52 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75919
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75919 ']'
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75919
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75919
00:20:56.453 killing process with pid 75919
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75919'
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75919
00:20:56.453 10:32:52 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75919
00:20:56.453 [2024-12-07 10:32:54.396485] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:20:56.453 [2024-12-07 10:32:54.396558] ublk.c: 
766:_ublk_fini_done: *DEBUG*: 00:20:56.453 00:20:56.453 real 1m6.021s 00:20:56.453 user 1m53.015s 00:20:56.453 sys 0m23.451s 00:20:56.453 10:32:55 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.453 10:32:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.453 ************************************ 00:20:56.453 END TEST ublk_recovery 00:20:56.453 ************************************ 00:20:56.712 10:32:55 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:20:56.712 10:32:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:56.712 10:32:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:56.712 10:32:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.712 10:32:55 -- common/autotest_common.sh@10 -- # set +x 00:20:56.712 10:32:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:56.712 10:32:55 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:56.712 10:32:55 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:56.712 10:32:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:56.712 10:32:55 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:56.712 10:32:55 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:56.712 10:32:55 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:56.712 10:32:55 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:56.712 10:32:55 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:56.712 10:32:55 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:20:56.712 10:32:55 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:56.712 10:32:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:56.712 10:32:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.712 10:32:55 -- common/autotest_common.sh@10 -- # set +x 00:20:56.712 ************************************ 00:20:56.712 START TEST ftl 00:20:56.712 ************************************ 00:20:56.712 10:32:55 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:56.712 * Looking for test storage... 00:20:56.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:56.712 10:32:56 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:56.712 10:32:56 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:20:56.712 10:32:56 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:56.971 10:32:56 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:56.971 10:32:56 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.971 10:32:56 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.971 10:32:56 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.971 10:32:56 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.971 10:32:56 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.971 10:32:56 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.971 10:32:56 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.971 10:32:56 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.971 10:32:56 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.971 10:32:56 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.972 10:32:56 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.972 10:32:56 ftl -- scripts/common.sh@344 -- # case "$op" in 00:20:56.972 10:32:56 ftl -- scripts/common.sh@345 -- # : 1 00:20:56.972 10:32:56 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.972 10:32:56 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:56.972 10:32:56 ftl -- scripts/common.sh@365 -- # decimal 1 00:20:56.972 10:32:56 ftl -- scripts/common.sh@353 -- # local d=1 00:20:56.972 10:32:56 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.972 10:32:56 ftl -- scripts/common.sh@355 -- # echo 1 00:20:56.972 10:32:56 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.972 10:32:56 ftl -- scripts/common.sh@366 -- # decimal 2 00:20:56.972 10:32:56 ftl -- scripts/common.sh@353 -- # local d=2 00:20:56.972 10:32:56 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.972 10:32:56 ftl -- scripts/common.sh@355 -- # echo 2 00:20:56.972 10:32:56 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.972 10:32:56 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.972 10:32:56 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.972 10:32:56 ftl -- scripts/common.sh@368 -- # return 0 00:20:56.972 10:32:56 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.972 10:32:56 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:56.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.972 --rc genhtml_branch_coverage=1 00:20:56.972 --rc genhtml_function_coverage=1 00:20:56.972 --rc genhtml_legend=1 00:20:56.972 --rc geninfo_all_blocks=1 00:20:56.972 --rc geninfo_unexecuted_blocks=1 00:20:56.972 00:20:56.972 ' 00:20:56.972 10:32:56 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:56.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.972 --rc genhtml_branch_coverage=1 00:20:56.972 --rc genhtml_function_coverage=1 00:20:56.972 --rc genhtml_legend=1 00:20:56.972 --rc geninfo_all_blocks=1 00:20:56.972 --rc geninfo_unexecuted_blocks=1 00:20:56.972 00:20:56.972 ' 00:20:56.972 10:32:56 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:56.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.972 --rc genhtml_branch_coverage=1 00:20:56.972 --rc genhtml_function_coverage=1 00:20:56.972 --rc genhtml_legend=1 00:20:56.972 --rc geninfo_all_blocks=1 00:20:56.972 --rc geninfo_unexecuted_blocks=1 00:20:56.972 00:20:56.972 ' 00:20:56.972 10:32:56 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:56.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.972 --rc genhtml_branch_coverage=1 00:20:56.972 --rc genhtml_function_coverage=1 00:20:56.972 --rc genhtml_legend=1 00:20:56.972 --rc geninfo_all_blocks=1 00:20:56.972 --rc geninfo_unexecuted_blocks=1 00:20:56.972 00:20:56.972 ' 00:20:56.972 10:32:56 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:56.972 10:32:56 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:56.972 10:32:56 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:56.972 10:32:56 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:56.972 10:32:56 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:56.972 10:32:56 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:56.972 10:32:56 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.972 10:32:56 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:56.972 10:32:56 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:56.972 10:32:56 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.972 10:32:56 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.972 10:32:56 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:56.972 10:32:56 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:56.972 10:32:56 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:56.972 10:32:56 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:56.972 10:32:56 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:56.972 10:32:56 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:56.972 10:32:56 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.972 10:32:56 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.972 10:32:56 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:56.972 10:32:56 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:56.972 10:32:56 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:56.972 10:32:56 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:56.972 10:32:56 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:56.972 10:32:56 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:56.972 10:32:56 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:56.972 10:32:56 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:56.972 10:32:56 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:56.972 10:32:56 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:56.972 10:32:56 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.972 10:32:56 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:20:56.972 10:32:56 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:20:56.972 10:32:56 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:20:56.972 10:32:56 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:20:56.972 10:32:56 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:57.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:57.797 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:57.797 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:57.797 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:57.797 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:57.797 10:32:57 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76725 00:20:57.797 10:32:57 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76725 00:20:57.797 10:32:57 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:20:57.797 10:32:57 ftl -- common/autotest_common.sh@835 -- # '[' -z 76725 ']' 00:20:57.797 10:32:57 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.797 10:32:57 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.797 10:32:57 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.797 10:32:57 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.797 10:32:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:58.055 [2024-12-07 10:32:57.176764] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:20:58.055 [2024-12-07 10:32:57.176888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76725 ] 00:20:58.055 [2024-12-07 10:32:57.358211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.313 [2024-12-07 10:32:57.472195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.882 10:32:58 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.882 10:32:58 ftl -- common/autotest_common.sh@868 -- # return 0 00:20:58.882 10:32:58 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:20:58.882 10:32:58 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:00.263 10:32:59 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:21:00.263 10:32:59 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:00.523 10:32:59 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:21:00.523 10:32:59 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:00.523 10:32:59 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:00.783 10:32:59 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:21:00.783 10:32:59 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:21:00.783 10:32:59 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:21:00.783 10:32:59 ftl -- ftl/ftl.sh@50 -- # break 00:21:00.783 10:32:59 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:21:00.783 10:32:59 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:21:00.783 10:32:59 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:00.783 10:32:59 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:00.783 10:33:00 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:21:00.783 10:33:00 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:21:00.783 10:33:00 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:21:00.783 10:33:00 ftl -- ftl/ftl.sh@63 -- # break 00:21:00.783 10:33:00 ftl -- ftl/ftl.sh@66 -- # killprocess 76725 00:21:00.783 10:33:00 ftl -- common/autotest_common.sh@954 -- # '[' -z 76725 ']' 00:21:00.783 10:33:00 ftl -- common/autotest_common.sh@958 -- # kill -0 76725 00:21:00.783 10:33:00 ftl -- common/autotest_common.sh@959 -- # uname 00:21:00.783 10:33:00 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:00.783 10:33:00 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76725 00:21:01.043 10:33:00 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:01.043 10:33:00 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:01.043 killing process with pid 76725 00:21:01.043 10:33:00 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76725' 00:21:01.043 10:33:00 ftl -- common/autotest_common.sh@973 -- # kill 76725 00:21:01.043 10:33:00 ftl -- common/autotest_common.sh@978 -- # wait 76725 00:21:03.583 10:33:02 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:21:03.583 10:33:02 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:03.583 10:33:02 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:03.583 10:33:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:03.583 10:33:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:03.583 ************************************ 00:21:03.583 START TEST ftl_fio_basic 00:21:03.583 ************************************ 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:03.583 * Looking for test storage... 00:21:03.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:03.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.583 --rc genhtml_branch_coverage=1 00:21:03.583 --rc genhtml_function_coverage=1 00:21:03.583 --rc genhtml_legend=1 00:21:03.583 --rc geninfo_all_blocks=1 00:21:03.583 --rc geninfo_unexecuted_blocks=1 00:21:03.583 00:21:03.583 ' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:03.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.583 --rc genhtml_branch_coverage=1 00:21:03.583 --rc genhtml_function_coverage=1 00:21:03.583 --rc genhtml_legend=1 00:21:03.583 --rc geninfo_all_blocks=1 00:21:03.583 --rc geninfo_unexecuted_blocks=1 00:21:03.583 00:21:03.583 ' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:03.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.583 --rc genhtml_branch_coverage=1 00:21:03.583 --rc genhtml_function_coverage=1 00:21:03.583 --rc genhtml_legend=1 00:21:03.583 --rc geninfo_all_blocks=1 00:21:03.583 --rc geninfo_unexecuted_blocks=1 00:21:03.583 00:21:03.583 ' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:03.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.583 --rc genhtml_branch_coverage=1 00:21:03.583 --rc genhtml_function_coverage=1 00:21:03.583 --rc genhtml_legend=1 00:21:03.583 --rc geninfo_all_blocks=1 00:21:03.583 --rc geninfo_unexecuted_blocks=1 00:21:03.583 00:21:03.583 ' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
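
The xtrace above is scripts/common.sh deciding whether the installed lcov is older than version 2: each version string is split on '.', '-' and ':' and the numeric fields are compared left to right. Below is a minimal standalone sketch of that comparison, using a hypothetical ver_lt name rather than the repo's lt/cmp_versions/decimal helpers, and assuming purely numeric fields:

  ver_lt() {
      # split both versions on '.', '-' or ':' into arrays of numeric fields
      local IFS='.-:'
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          # missing fields count as 0, mirroring the padded comparison traced above
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"

In this run the check succeeds, which is why the coverage-related LCOV_OPTS/LCOV flags are exported in the trace that follows.
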
00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76874 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76874 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76874 ']' 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.583 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.584 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.584 10:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:03.584 [2024-12-07 10:33:02.849847] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
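
For orientation, ftl/fio.sh keys its job list off an associative array: the 'basic' suite selected in this run expands to the three randw-verify jobs traced above, while 'extended' and 'nightly' carry the longer drive-prep workloads. A rough sketch of that selection, assuming the suite name is the third positional argument (after the base and cache PCI addresses) and using a hypothetical run_fio_job helper rather than the script's real plumbing:

  declare -A suite
  suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
  suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
  suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

  device=$1 cache_device=$2            # 0000:00:11.0 and 0000:00:10.0 in this run
  tests=${suite[$3]}                   # "basic" -> the three randw-verify jobs
  for job in $tests; do
      run_fio_job "$job"               # hypothetical helper; the real script feeds fio per-job config files
  done
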
00:21:03.584 [2024-12-07 10:33:02.850003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76874 ] 00:21:03.843 [2024-12-07 10:33:03.039280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:03.843 [2024-12-07 10:33:03.157942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.843 [2024-12-07 10:33:03.158096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.843 [2024-12-07 10:33:03.158147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.779 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.779 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:21:04.780 10:33:04 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:04.780 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:21:04.780 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:04.780 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:21:04.780 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:21:04.780 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:05.037 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:05.037 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:21:05.037 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:05.037 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:05.037 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:05.037 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:05.037 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:05.037 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:05.295 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:05.295 { 00:21:05.295 "name": "nvme0n1", 00:21:05.295 "aliases": [ 00:21:05.295 "7fa173d2-03d2-42d0-84f1-5e0b2c6e10df" 00:21:05.295 ], 00:21:05.295 "product_name": "NVMe disk", 00:21:05.295 "block_size": 4096, 00:21:05.295 "num_blocks": 1310720, 00:21:05.295 "uuid": "7fa173d2-03d2-42d0-84f1-5e0b2c6e10df", 00:21:05.295 "numa_id": -1, 00:21:05.295 "assigned_rate_limits": { 00:21:05.295 "rw_ios_per_sec": 0, 00:21:05.295 "rw_mbytes_per_sec": 0, 00:21:05.295 "r_mbytes_per_sec": 0, 00:21:05.295 "w_mbytes_per_sec": 0 00:21:05.295 }, 00:21:05.295 "claimed": false, 00:21:05.295 "zoned": false, 00:21:05.295 "supported_io_types": { 00:21:05.295 "read": true, 00:21:05.295 "write": true, 00:21:05.295 "unmap": true, 00:21:05.295 "flush": true, 00:21:05.295 "reset": true, 00:21:05.295 "nvme_admin": true, 00:21:05.295 "nvme_io": true, 00:21:05.295 "nvme_io_md": false, 00:21:05.296 "write_zeroes": true, 00:21:05.296 "zcopy": false, 00:21:05.296 "get_zone_info": false, 00:21:05.296 "zone_management": false, 00:21:05.296 "zone_append": false, 00:21:05.296 "compare": true, 00:21:05.296 "compare_and_write": false, 00:21:05.296 "abort": true, 00:21:05.296 
"seek_hole": false, 00:21:05.296 "seek_data": false, 00:21:05.296 "copy": true, 00:21:05.296 "nvme_iov_md": false 00:21:05.296 }, 00:21:05.296 "driver_specific": { 00:21:05.296 "nvme": [ 00:21:05.296 { 00:21:05.296 "pci_address": "0000:00:11.0", 00:21:05.296 "trid": { 00:21:05.296 "trtype": "PCIe", 00:21:05.296 "traddr": "0000:00:11.0" 00:21:05.296 }, 00:21:05.296 "ctrlr_data": { 00:21:05.296 "cntlid": 0, 00:21:05.296 "vendor_id": "0x1b36", 00:21:05.296 "model_number": "QEMU NVMe Ctrl", 00:21:05.296 "serial_number": "12341", 00:21:05.296 "firmware_revision": "8.0.0", 00:21:05.296 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:05.296 "oacs": { 00:21:05.296 "security": 0, 00:21:05.296 "format": 1, 00:21:05.296 "firmware": 0, 00:21:05.296 "ns_manage": 1 00:21:05.296 }, 00:21:05.296 "multi_ctrlr": false, 00:21:05.296 "ana_reporting": false 00:21:05.296 }, 00:21:05.296 "vs": { 00:21:05.296 "nvme_version": "1.4" 00:21:05.296 }, 00:21:05.296 "ns_data": { 00:21:05.296 "id": 1, 00:21:05.296 "can_share": false 00:21:05.296 } 00:21:05.296 } 00:21:05.296 ], 00:21:05.296 "mp_policy": "active_passive" 00:21:05.296 } 00:21:05.296 } 00:21:05.296 ]' 00:21:05.296 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:05.296 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:05.296 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:05.296 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:05.296 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:05.296 10:33:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:21:05.296 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:21:05.296 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:05.296 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:21:05.296 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:05.296 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:05.554 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:21:05.554 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:05.813 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=a21e2941-9c0b-4ea0-baba-2445ec0950b0 00:21:05.813 10:33:04 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a21e2941-9c0b-4ea0-baba-2445ec0950b0 00:21:06.072 10:33:05 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=be3cc213-bca4-43fd-907e-2f75ca909745 00:21:06.072 10:33:05 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 be3cc213-bca4-43fd-907e-2f75ca909745 00:21:06.072 10:33:05 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:21:06.072 10:33:05 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:06.072 10:33:05 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=be3cc213-bca4-43fd-907e-2f75ca909745 00:21:06.072 10:33:05 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:21:06.072 10:33:05 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size be3cc213-bca4-43fd-907e-2f75ca909745 00:21:06.072 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=be3cc213-bca4-43fd-907e-2f75ca909745 
00:21:06.072 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:06.072 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:06.072 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:06.072 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b be3cc213-bca4-43fd-907e-2f75ca909745 00:21:06.072 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:06.072 { 00:21:06.072 "name": "be3cc213-bca4-43fd-907e-2f75ca909745", 00:21:06.072 "aliases": [ 00:21:06.072 "lvs/nvme0n1p0" 00:21:06.072 ], 00:21:06.072 "product_name": "Logical Volume", 00:21:06.072 "block_size": 4096, 00:21:06.072 "num_blocks": 26476544, 00:21:06.072 "uuid": "be3cc213-bca4-43fd-907e-2f75ca909745", 00:21:06.072 "assigned_rate_limits": { 00:21:06.072 "rw_ios_per_sec": 0, 00:21:06.072 "rw_mbytes_per_sec": 0, 00:21:06.072 "r_mbytes_per_sec": 0, 00:21:06.072 "w_mbytes_per_sec": 0 00:21:06.072 }, 00:21:06.072 "claimed": false, 00:21:06.072 "zoned": false, 00:21:06.072 "supported_io_types": { 00:21:06.072 "read": true, 00:21:06.072 "write": true, 00:21:06.072 "unmap": true, 00:21:06.072 "flush": false, 00:21:06.072 "reset": true, 00:21:06.072 "nvme_admin": false, 00:21:06.072 "nvme_io": false, 00:21:06.072 "nvme_io_md": false, 00:21:06.072 "write_zeroes": true, 00:21:06.072 "zcopy": false, 00:21:06.072 "get_zone_info": false, 00:21:06.072 "zone_management": false, 00:21:06.072 "zone_append": false, 00:21:06.072 "compare": false, 00:21:06.072 "compare_and_write": false, 00:21:06.072 "abort": false, 00:21:06.072 "seek_hole": true, 00:21:06.072 "seek_data": true, 00:21:06.072 "copy": false, 00:21:06.072 "nvme_iov_md": false 00:21:06.072 }, 00:21:06.072 "driver_specific": { 00:21:06.072 "lvol": { 00:21:06.072 "lvol_store_uuid": "a21e2941-9c0b-4ea0-baba-2445ec0950b0", 00:21:06.072 "base_bdev": "nvme0n1", 00:21:06.072 "thin_provision": true, 00:21:06.072 "num_allocated_clusters": 0, 00:21:06.072 "snapshot": false, 00:21:06.072 "clone": false, 00:21:06.072 "esnap_clone": false 00:21:06.072 } 00:21:06.072 } 00:21:06.072 } 00:21:06.072 ]' 00:21:06.072 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:06.331 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:06.331 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:06.331 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:06.331 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:06.331 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:06.331 10:33:05 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:21:06.331 10:33:05 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:21:06.331 10:33:05 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:06.589 10:33:05 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:06.589 10:33:05 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:06.589 10:33:05 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size be3cc213-bca4-43fd-907e-2f75ca909745 00:21:06.589 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=be3cc213-bca4-43fd-907e-2f75ca909745 00:21:06.589 10:33:05 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:06.589 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:06.589 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:06.589 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b be3cc213-bca4-43fd-907e-2f75ca909745 00:21:06.848 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:06.848 { 00:21:06.848 "name": "be3cc213-bca4-43fd-907e-2f75ca909745", 00:21:06.848 "aliases": [ 00:21:06.848 "lvs/nvme0n1p0" 00:21:06.848 ], 00:21:06.848 "product_name": "Logical Volume", 00:21:06.848 "block_size": 4096, 00:21:06.848 "num_blocks": 26476544, 00:21:06.848 "uuid": "be3cc213-bca4-43fd-907e-2f75ca909745", 00:21:06.848 "assigned_rate_limits": { 00:21:06.848 "rw_ios_per_sec": 0, 00:21:06.848 "rw_mbytes_per_sec": 0, 00:21:06.848 "r_mbytes_per_sec": 0, 00:21:06.848 "w_mbytes_per_sec": 0 00:21:06.848 }, 00:21:06.848 "claimed": false, 00:21:06.848 "zoned": false, 00:21:06.848 "supported_io_types": { 00:21:06.848 "read": true, 00:21:06.848 "write": true, 00:21:06.848 "unmap": true, 00:21:06.848 "flush": false, 00:21:06.848 "reset": true, 00:21:06.848 "nvme_admin": false, 00:21:06.848 "nvme_io": false, 00:21:06.848 "nvme_io_md": false, 00:21:06.848 "write_zeroes": true, 00:21:06.848 "zcopy": false, 00:21:06.848 "get_zone_info": false, 00:21:06.848 "zone_management": false, 00:21:06.848 "zone_append": false, 00:21:06.848 "compare": false, 00:21:06.848 "compare_and_write": false, 00:21:06.848 "abort": false, 00:21:06.848 "seek_hole": true, 00:21:06.848 "seek_data": true, 00:21:06.848 "copy": false, 00:21:06.848 "nvme_iov_md": false 00:21:06.848 }, 00:21:06.848 "driver_specific": { 00:21:06.848 "lvol": { 00:21:06.848 "lvol_store_uuid": "a21e2941-9c0b-4ea0-baba-2445ec0950b0", 00:21:06.848 "base_bdev": "nvme0n1", 00:21:06.848 "thin_provision": true, 00:21:06.848 "num_allocated_clusters": 0, 00:21:06.848 "snapshot": false, 00:21:06.848 "clone": false, 00:21:06.848 "esnap_clone": false 00:21:06.848 } 00:21:06.848 } 00:21:06.848 } 00:21:06.848 ]' 00:21:06.848 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:06.848 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:06.848 10:33:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:06.848 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:06.848 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:06.848 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:06.848 10:33:06 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:21:06.848 10:33:06 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:07.107 10:33:06 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:21:07.107 10:33:06 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:21:07.107 10:33:06 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:21:07.107 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:21:07.107 10:33:06 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size be3cc213-bca4-43fd-907e-2f75ca909745 00:21:07.107 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=be3cc213-bca4-43fd-907e-2f75ca909745 00:21:07.107 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:07.107 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:07.107 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:07.107 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b be3cc213-bca4-43fd-907e-2f75ca909745 00:21:07.107 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:07.107 { 00:21:07.107 "name": "be3cc213-bca4-43fd-907e-2f75ca909745", 00:21:07.107 "aliases": [ 00:21:07.107 "lvs/nvme0n1p0" 00:21:07.107 ], 00:21:07.107 "product_name": "Logical Volume", 00:21:07.107 "block_size": 4096, 00:21:07.107 "num_blocks": 26476544, 00:21:07.107 "uuid": "be3cc213-bca4-43fd-907e-2f75ca909745", 00:21:07.107 "assigned_rate_limits": { 00:21:07.107 "rw_ios_per_sec": 0, 00:21:07.107 "rw_mbytes_per_sec": 0, 00:21:07.107 "r_mbytes_per_sec": 0, 00:21:07.107 "w_mbytes_per_sec": 0 00:21:07.107 }, 00:21:07.107 "claimed": false, 00:21:07.107 "zoned": false, 00:21:07.107 "supported_io_types": { 00:21:07.107 "read": true, 00:21:07.107 "write": true, 00:21:07.107 "unmap": true, 00:21:07.107 "flush": false, 00:21:07.107 "reset": true, 00:21:07.107 "nvme_admin": false, 00:21:07.107 "nvme_io": false, 00:21:07.107 "nvme_io_md": false, 00:21:07.107 "write_zeroes": true, 00:21:07.107 "zcopy": false, 00:21:07.107 "get_zone_info": false, 00:21:07.107 "zone_management": false, 00:21:07.107 "zone_append": false, 00:21:07.107 "compare": false, 00:21:07.107 "compare_and_write": false, 00:21:07.107 "abort": false, 00:21:07.107 "seek_hole": true, 00:21:07.107 "seek_data": true, 00:21:07.107 "copy": false, 00:21:07.107 "nvme_iov_md": false 00:21:07.107 }, 00:21:07.107 "driver_specific": { 00:21:07.107 "lvol": { 00:21:07.107 "lvol_store_uuid": "a21e2941-9c0b-4ea0-baba-2445ec0950b0", 00:21:07.107 "base_bdev": "nvme0n1", 00:21:07.107 "thin_provision": true, 00:21:07.107 "num_allocated_clusters": 0, 00:21:07.107 "snapshot": false, 00:21:07.107 "clone": false, 00:21:07.107 "esnap_clone": false 00:21:07.107 } 00:21:07.107 } 00:21:07.107 } 00:21:07.107 ]' 00:21:07.107 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:07.107 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:07.107 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:07.367 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:07.367 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:07.367 10:33:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:07.367 10:33:06 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:21:07.367 10:33:06 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:21:07.367 10:33:06 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d be3cc213-bca4-43fd-907e-2f75ca909745 -c nvc0n1p0 --l2p_dram_limit 60 00:21:07.367 [2024-12-07 10:33:06.678797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.367 [2024-12-07 10:33:06.678966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:07.367 [2024-12-07 10:33:06.679010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:07.367 
[2024-12-07 10:33:06.679022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.367 [2024-12-07 10:33:06.679137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.367 [2024-12-07 10:33:06.679155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:07.367 [2024-12-07 10:33:06.679171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:07.367 [2024-12-07 10:33:06.679183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.367 [2024-12-07 10:33:06.679272] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:07.367 [2024-12-07 10:33:06.680375] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:07.367 [2024-12-07 10:33:06.680414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.367 [2024-12-07 10:33:06.680427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:07.367 [2024-12-07 10:33:06.680441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms 00:21:07.367 [2024-12-07 10:33:06.680453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.367 [2024-12-07 10:33:06.680573] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 11d39fcd-81c9-4668-94dd-5caeb95478c4 00:21:07.367 [2024-12-07 10:33:06.682112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.367 [2024-12-07 10:33:06.682149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:07.368 [2024-12-07 10:33:06.682163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:07.368 [2024-12-07 10:33:06.682177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.368 [2024-12-07 10:33:06.689711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.368 [2024-12-07 10:33:06.689849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:07.368 [2024-12-07 10:33:06.689871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.356 ms 00:21:07.368 [2024-12-07 10:33:06.689885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.368 [2024-12-07 10:33:06.690036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.368 [2024-12-07 10:33:06.690059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:07.368 [2024-12-07 10:33:06.690073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:21:07.368 [2024-12-07 10:33:06.690090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.368 [2024-12-07 10:33:06.690202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.368 [2024-12-07 10:33:06.690223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:07.368 [2024-12-07 10:33:06.690236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:07.368 [2024-12-07 10:33:06.690249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.368 [2024-12-07 10:33:06.690322] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:07.368 [2024-12-07 10:33:06.694963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.368 [2024-12-07 
10:33:06.695005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:07.368 [2024-12-07 10:33:06.695023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.659 ms 00:21:07.368 [2024-12-07 10:33:06.695037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.368 [2024-12-07 10:33:06.695139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.368 [2024-12-07 10:33:06.695156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:07.368 [2024-12-07 10:33:06.695171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:07.368 [2024-12-07 10:33:06.695182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.368 [2024-12-07 10:33:06.695283] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:07.368 [2024-12-07 10:33:06.695452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:07.368 [2024-12-07 10:33:06.695481] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:07.368 [2024-12-07 10:33:06.695496] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:07.368 [2024-12-07 10:33:06.695513] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:07.368 [2024-12-07 10:33:06.695527] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:07.368 [2024-12-07 10:33:06.695543] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:07.368 [2024-12-07 10:33:06.695556] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:07.368 [2024-12-07 10:33:06.695569] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:07.368 [2024-12-07 10:33:06.695580] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:07.368 [2024-12-07 10:33:06.695595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.368 [2024-12-07 10:33:06.695608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:07.368 [2024-12-07 10:33:06.695621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:21:07.368 [2024-12-07 10:33:06.695633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.368 [2024-12-07 10:33:06.695753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.368 [2024-12-07 10:33:06.695765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:07.368 [2024-12-07 10:33:06.695778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:07.368 [2024-12-07 10:33:06.695789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.368 [2024-12-07 10:33:06.695962] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:07.368 [2024-12-07 10:33:06.695990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:07.368 [2024-12-07 10:33:06.696009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.368 [2024-12-07 10:33:06.696021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.368 [2024-12-07 10:33:06.696036] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:21:07.368 [2024-12-07 10:33:06.696045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:07.368 [2024-12-07 10:33:06.696059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:07.368 [2024-12-07 10:33:06.696069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:07.368 [2024-12-07 10:33:06.696083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:07.368 [2024-12-07 10:33:06.696094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.368 [2024-12-07 10:33:06.696106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:07.368 [2024-12-07 10:33:06.696115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:07.368 [2024-12-07 10:33:06.696128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.368 [2024-12-07 10:33:06.696138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:07.368 [2024-12-07 10:33:06.696151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:07.368 [2024-12-07 10:33:06.696162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.368 [2024-12-07 10:33:06.696176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:07.368 [2024-12-07 10:33:06.696186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:07.368 [2024-12-07 10:33:06.696199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.368 [2024-12-07 10:33:06.696208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:07.368 [2024-12-07 10:33:06.696220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:07.368 [2024-12-07 10:33:06.696230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.368 [2024-12-07 10:33:06.696242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:07.368 [2024-12-07 10:33:06.696252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:07.368 [2024-12-07 10:33:06.696265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.368 [2024-12-07 10:33:06.696274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:07.368 [2024-12-07 10:33:06.696286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:07.368 [2024-12-07 10:33:06.696296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.368 [2024-12-07 10:33:06.696308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:07.368 [2024-12-07 10:33:06.696317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:07.368 [2024-12-07 10:33:06.696330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.368 [2024-12-07 10:33:06.696339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:07.368 [2024-12-07 10:33:06.696354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:07.368 [2024-12-07 10:33:06.696380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.368 [2024-12-07 10:33:06.696393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:07.368 [2024-12-07 10:33:06.696404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:07.368 [2024-12-07 10:33:06.696416] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.368 [2024-12-07 10:33:06.696427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:07.368 [2024-12-07 10:33:06.696439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:07.368 [2024-12-07 10:33:06.696449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.368 [2024-12-07 10:33:06.696462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:07.368 [2024-12-07 10:33:06.696471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:07.368 [2024-12-07 10:33:06.696484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.368 [2024-12-07 10:33:06.696494] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:07.368 [2024-12-07 10:33:06.696507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:07.368 [2024-12-07 10:33:06.696518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.368 [2024-12-07 10:33:06.696532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.368 [2024-12-07 10:33:06.696545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:07.368 [2024-12-07 10:33:06.696561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:07.368 [2024-12-07 10:33:06.696571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:07.368 [2024-12-07 10:33:06.696583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:07.368 [2024-12-07 10:33:06.696593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:07.368 [2024-12-07 10:33:06.696605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:07.368 [2024-12-07 10:33:06.696627] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:07.368 [2024-12-07 10:33:06.696646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:07.368 [2024-12-07 10:33:06.696658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:07.368 [2024-12-07 10:33:06.696672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:07.368 [2024-12-07 10:33:06.696682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:07.368 [2024-12-07 10:33:06.696695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:07.368 [2024-12-07 10:33:06.696706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:07.369 [2024-12-07 10:33:06.696720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:07.369 [2024-12-07 10:33:06.696730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:07.369 [2024-12-07 10:33:06.696744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:21:07.369 [2024-12-07 10:33:06.696754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:07.369 [2024-12-07 10:33:06.696770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:07.369 [2024-12-07 10:33:06.696780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:07.369 [2024-12-07 10:33:06.696793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:07.369 [2024-12-07 10:33:06.696804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:07.369 [2024-12-07 10:33:06.696816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:07.369 [2024-12-07 10:33:06.696827] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:07.369 [2024-12-07 10:33:06.696842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:07.369 [2024-12-07 10:33:06.696856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:07.369 [2024-12-07 10:33:06.696869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:07.369 [2024-12-07 10:33:06.696880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:07.369 [2024-12-07 10:33:06.696893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:07.369 [2024-12-07 10:33:06.696906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.369 [2024-12-07 10:33:06.696919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:07.369 [2024-12-07 10:33:06.696930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:21:07.369 [2024-12-07 10:33:06.696943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.369 [2024-12-07 10:33:06.697128] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
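
The layout dump above is internally consistent and worth a quick cross-check: with the reported 4-byte L2P address size, the 20971520 L2P entries need exactly the 80.00 MiB assigned to the l2p region, and those entries map the 80 GiB of user-visible space (20971520 blocks of 4 KiB) that the ftl0 bdev exposes:

  echo $(( 20971520 * 4 / 1024 / 1024 ))              # 80  -> "Region l2p ... blocks: 80.00 MiB"
  echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))    # 80  -> ftl0 exposes 80 GiB of user blocks

The --l2p_dram_limit 60 passed to bdev_ftl_create caps how much of that 80 MiB table stays resident in DRAM, which is why the later log reports "l2p maximum resident size is: 59 (of 60) MiB".
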
00:21:07.369 [2024-12-07 10:33:06.697147] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:11.559 [2024-12-07 10:33:10.578009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.559 [2024-12-07 10:33:10.578094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:11.559 [2024-12-07 10:33:10.578112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3887.181 ms 00:21:11.559 [2024-12-07 10:33:10.578138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.559 [2024-12-07 10:33:10.613300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.559 [2024-12-07 10:33:10.613353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.559 [2024-12-07 10:33:10.613369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.766 ms 00:21:11.559 [2024-12-07 10:33:10.613383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.559 [2024-12-07 10:33:10.613544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.559 [2024-12-07 10:33:10.613565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:11.559 [2024-12-07 10:33:10.613578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:11.559 [2024-12-07 10:33:10.613593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.559 [2024-12-07 10:33:10.687173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.559 [2024-12-07 10:33:10.687222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.559 [2024-12-07 10:33:10.687242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.595 ms 00:21:11.559 [2024-12-07 10:33:10.687258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.559 [2024-12-07 10:33:10.687337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.559 [2024-12-07 10:33:10.687353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.559 [2024-12-07 10:33:10.687365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:11.559 [2024-12-07 10:33:10.687379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.559 [2024-12-07 10:33:10.687890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.559 [2024-12-07 10:33:10.687921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.559 [2024-12-07 10:33:10.687935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:21:11.559 [2024-12-07 10:33:10.687952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.559 [2024-12-07 10:33:10.688127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.559 [2024-12-07 10:33:10.688150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.559 [2024-12-07 10:33:10.688163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:21:11.559 [2024-12-07 10:33:10.688179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.559 [2024-12-07 10:33:10.708211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.559 [2024-12-07 10:33:10.708248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.559 [2024-12-07 
10:33:10.708279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.998 ms 00:21:11.559 [2024-12-07 10:33:10.708293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.559 [2024-12-07 10:33:10.720360] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:11.559 [2024-12-07 10:33:10.736372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.559 [2024-12-07 10:33:10.736409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:11.560 [2024-12-07 10:33:10.736429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.959 ms 00:21:11.560 [2024-12-07 10:33:10.736441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.560 [2024-12-07 10:33:10.845199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.560 [2024-12-07 10:33:10.845251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:11.560 [2024-12-07 10:33:10.845273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.857 ms 00:21:11.560 [2024-12-07 10:33:10.845285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.560 [2024-12-07 10:33:10.845552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.560 [2024-12-07 10:33:10.845572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:11.560 [2024-12-07 10:33:10.845590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:21:11.560 [2024-12-07 10:33:10.845601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.560 [2024-12-07 10:33:10.881947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.560 [2024-12-07 10:33:10.881993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:11.560 [2024-12-07 10:33:10.882009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.253 ms 00:21:11.560 [2024-12-07 10:33:10.882020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.820 [2024-12-07 10:33:10.916977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.820 [2024-12-07 10:33:10.917017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:11.820 [2024-12-07 10:33:10.917051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.916 ms 00:21:11.820 [2024-12-07 10:33:10.917062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.820 [2024-12-07 10:33:10.917858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.820 [2024-12-07 10:33:10.917889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:11.820 [2024-12-07 10:33:10.917905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:21:11.820 [2024-12-07 10:33:10.917917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.820 [2024-12-07 10:33:11.049247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.820 [2024-12-07 10:33:11.049285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:11.820 [2024-12-07 10:33:11.049306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 131.414 ms 00:21:11.820 [2024-12-07 10:33:11.049321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.820 [2024-12-07 
10:33:11.085351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.820 [2024-12-07 10:33:11.085387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:11.820 [2024-12-07 10:33:11.085404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.931 ms 00:21:11.820 [2024-12-07 10:33:11.085416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.820 [2024-12-07 10:33:11.120048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.820 [2024-12-07 10:33:11.120082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:11.820 [2024-12-07 10:33:11.120099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.610 ms 00:21:11.820 [2024-12-07 10:33:11.120109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.820 [2024-12-07 10:33:11.156720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.820 [2024-12-07 10:33:11.156754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:11.820 [2024-12-07 10:33:11.156771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.597 ms 00:21:11.820 [2024-12-07 10:33:11.156783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.820 [2024-12-07 10:33:11.156885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.820 [2024-12-07 10:33:11.156899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:11.820 [2024-12-07 10:33:11.156918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:11.820 [2024-12-07 10:33:11.156929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.820 [2024-12-07 10:33:11.157153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.820 [2024-12-07 10:33:11.157175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:11.820 [2024-12-07 10:33:11.157189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:11.820 [2024-12-07 10:33:11.157202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.820 [2024-12-07 10:33:11.158596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4486.603 ms, result 0 00:21:11.820 { 00:21:11.820 "name": "ftl0", 00:21:11.820 "uuid": "11d39fcd-81c9-4668-94dd-5caeb95478c4" 00:21:11.820 } 00:21:12.080 10:33:11 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:21:12.080 10:33:11 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:12.080 10:33:11 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:12.080 10:33:11 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:21:12.080 10:33:11 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:12.080 10:33:11 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:12.080 10:33:11 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:12.080 10:33:11 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:12.339 [ 00:21:12.339 { 00:21:12.339 "name": "ftl0", 00:21:12.339 "aliases": [ 00:21:12.339 "11d39fcd-81c9-4668-94dd-5caeb95478c4" 00:21:12.339 ], 00:21:12.339 "product_name": "FTL 
disk", 00:21:12.339 "block_size": 4096, 00:21:12.339 "num_blocks": 20971520, 00:21:12.339 "uuid": "11d39fcd-81c9-4668-94dd-5caeb95478c4", 00:21:12.339 "assigned_rate_limits": { 00:21:12.339 "rw_ios_per_sec": 0, 00:21:12.339 "rw_mbytes_per_sec": 0, 00:21:12.339 "r_mbytes_per_sec": 0, 00:21:12.339 "w_mbytes_per_sec": 0 00:21:12.339 }, 00:21:12.339 "claimed": false, 00:21:12.339 "zoned": false, 00:21:12.339 "supported_io_types": { 00:21:12.339 "read": true, 00:21:12.339 "write": true, 00:21:12.339 "unmap": true, 00:21:12.339 "flush": true, 00:21:12.339 "reset": false, 00:21:12.339 "nvme_admin": false, 00:21:12.339 "nvme_io": false, 00:21:12.339 "nvme_io_md": false, 00:21:12.339 "write_zeroes": true, 00:21:12.339 "zcopy": false, 00:21:12.339 "get_zone_info": false, 00:21:12.339 "zone_management": false, 00:21:12.339 "zone_append": false, 00:21:12.339 "compare": false, 00:21:12.339 "compare_and_write": false, 00:21:12.339 "abort": false, 00:21:12.339 "seek_hole": false, 00:21:12.339 "seek_data": false, 00:21:12.339 "copy": false, 00:21:12.339 "nvme_iov_md": false 00:21:12.339 }, 00:21:12.339 "driver_specific": { 00:21:12.340 "ftl": { 00:21:12.340 "base_bdev": "be3cc213-bca4-43fd-907e-2f75ca909745", 00:21:12.340 "cache": "nvc0n1p0" 00:21:12.340 } 00:21:12.340 } 00:21:12.340 } 00:21:12.340 ] 00:21:12.340 10:33:11 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:21:12.340 10:33:11 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:21:12.340 10:33:11 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:12.598 10:33:11 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:21:12.598 10:33:11 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:12.858 [2024-12-07 10:33:11.959504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.858 [2024-12-07 10:33:11.959556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:12.858 [2024-12-07 10:33:11.959572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:12.858 [2024-12-07 10:33:11.959588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.858 [2024-12-07 10:33:11.959663] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:12.858 [2024-12-07 10:33:11.963753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.858 [2024-12-07 10:33:11.963785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:12.858 [2024-12-07 10:33:11.963818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.058 ms 00:21:12.858 [2024-12-07 10:33:11.963839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.858 [2024-12-07 10:33:11.964780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.858 [2024-12-07 10:33:11.964806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:12.858 [2024-12-07 10:33:11.964822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:21:12.858 [2024-12-07 10:33:11.964834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.858 [2024-12-07 10:33:11.967422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.858 [2024-12-07 10:33:11.967449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:12.858 
[2024-12-07 10:33:11.967465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:21:12.858 [2024-12-07 10:33:11.967477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.858 [2024-12-07 10:33:11.972452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.858 [2024-12-07 10:33:11.972482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:12.858 [2024-12-07 10:33:11.972497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.927 ms 00:21:12.858 [2024-12-07 10:33:11.972508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.858 [2024-12-07 10:33:12.007952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.858 [2024-12-07 10:33:12.008005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:12.858 [2024-12-07 10:33:12.008037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.327 ms 00:21:12.858 [2024-12-07 10:33:12.008047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.858 [2024-12-07 10:33:12.029521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.858 [2024-12-07 10:33:12.029555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:12.858 [2024-12-07 10:33:12.029575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.415 ms 00:21:12.858 [2024-12-07 10:33:12.029587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.858 [2024-12-07 10:33:12.029945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.858 [2024-12-07 10:33:12.029993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:12.858 [2024-12-07 10:33:12.030009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:21:12.858 [2024-12-07 10:33:12.030020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.858 [2024-12-07 10:33:12.064797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.858 [2024-12-07 10:33:12.064830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:12.858 [2024-12-07 10:33:12.064846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.777 ms 00:21:12.858 [2024-12-07 10:33:12.064857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.858 [2024-12-07 10:33:12.098719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.858 [2024-12-07 10:33:12.098753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:12.858 [2024-12-07 10:33:12.098785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.838 ms 00:21:12.858 [2024-12-07 10:33:12.098797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.858 [2024-12-07 10:33:12.132853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.858 [2024-12-07 10:33:12.132885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:12.858 [2024-12-07 10:33:12.132900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.036 ms 00:21:12.858 [2024-12-07 10:33:12.132911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.858 [2024-12-07 10:33:12.167253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.858 [2024-12-07 10:33:12.167288] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:12.858 [2024-12-07 10:33:12.167304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.157 ms 00:21:12.858 [2024-12-07 10:33:12.167315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.858 [2024-12-07 10:33:12.167387] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:12.858 [2024-12-07 10:33:12.167405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:12.858 [2024-12-07 10:33:12.167639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 
[2024-12-07 10:33:12.167701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.167983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:21:12.859 [2024-12-07 10:33:12.168036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:12.859 [2024-12-07 10:33:12.168713] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:12.859 [2024-12-07 10:33:12.168727] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 11d39fcd-81c9-4668-94dd-5caeb95478c4 00:21:12.859 [2024-12-07 10:33:12.168737] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:12.859 [2024-12-07 10:33:12.168753] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:12.859 [2024-12-07 10:33:12.168763] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:12.859 [2024-12-07 10:33:12.168779] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:12.859 [2024-12-07 10:33:12.168790] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:12.859 [2024-12-07 10:33:12.168802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:12.859 [2024-12-07 10:33:12.168813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:12.859 [2024-12-07 10:33:12.168824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:12.859 [2024-12-07 10:33:12.168833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:12.859 [2024-12-07 10:33:12.168849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.859 [2024-12-07 10:33:12.168859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:12.859 [2024-12-07 10:33:12.168874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms 00:21:12.859 [2024-12-07 10:33:12.168884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.859 [2024-12-07 10:33:12.188314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.859 [2024-12-07 10:33:12.188348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:12.859 [2024-12-07 10:33:12.188363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.334 ms 00:21:12.859 [2024-12-07 10:33:12.188372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.859 [2024-12-07 10:33:12.188986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.859 [2024-12-07 10:33:12.189020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:12.859 [2024-12-07 10:33:12.189034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:21:12.859 [2024-12-07 10:33:12.189044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.118 [2024-12-07 10:33:12.255383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.118 [2024-12-07 10:33:12.255420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:13.118 [2024-12-07 10:33:12.255435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.118 [2024-12-07 10:33:12.255446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:13.118 [2024-12-07 10:33:12.255529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.118 [2024-12-07 10:33:12.255540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:13.118 [2024-12-07 10:33:12.255553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.118 [2024-12-07 10:33:12.255563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.118 [2024-12-07 10:33:12.255700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.118 [2024-12-07 10:33:12.255716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:13.118 [2024-12-07 10:33:12.255729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.118 [2024-12-07 10:33:12.255739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.118 [2024-12-07 10:33:12.255810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.118 [2024-12-07 10:33:12.255820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:13.118 [2024-12-07 10:33:12.255833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.118 [2024-12-07 10:33:12.255843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.118 [2024-12-07 10:33:12.382071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.118 [2024-12-07 10:33:12.382117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:13.118 [2024-12-07 10:33:12.382134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.118 [2024-12-07 10:33:12.382146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.378 [2024-12-07 10:33:12.481049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.378 [2024-12-07 10:33:12.481105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:13.378 [2024-12-07 10:33:12.481150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.378 [2024-12-07 10:33:12.481161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.378 [2024-12-07 10:33:12.481319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.378 [2024-12-07 10:33:12.481331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:13.378 [2024-12-07 10:33:12.481349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.378 [2024-12-07 10:33:12.481359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.378 [2024-12-07 10:33:12.481508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.378 [2024-12-07 10:33:12.481520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:13.378 [2024-12-07 10:33:12.481533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.378 [2024-12-07 10:33:12.481543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.378 [2024-12-07 10:33:12.481699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.378 [2024-12-07 10:33:12.481713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:13.378 [2024-12-07 10:33:12.481727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.378 [2024-12-07 
10:33:12.481740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.378 [2024-12-07 10:33:12.481832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.378 [2024-12-07 10:33:12.481854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:13.378 [2024-12-07 10:33:12.481867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.378 [2024-12-07 10:33:12.481877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.378 [2024-12-07 10:33:12.481951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.378 [2024-12-07 10:33:12.481963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:13.378 [2024-12-07 10:33:12.481987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.378 [2024-12-07 10:33:12.482001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.378 [2024-12-07 10:33:12.482094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.378 [2024-12-07 10:33:12.482106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:13.378 [2024-12-07 10:33:12.482120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.378 [2024-12-07 10:33:12.482130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.378 [2024-12-07 10:33:12.482461] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 523.776 ms, result 0 00:21:13.378 true 00:21:13.378 10:33:12 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76874 00:21:13.378 10:33:12 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76874 ']' 00:21:13.378 10:33:12 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76874 00:21:13.378 10:33:12 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:21:13.378 10:33:12 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.378 10:33:12 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76874 00:21:13.378 killing process with pid 76874 00:21:13.378 10:33:12 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:13.378 10:33:12 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:13.378 10:33:12 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76874' 00:21:13.378 10:33:12 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76874 00:21:13.378 10:33:12 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76874 00:21:18.652 10:33:17 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:21:18.652 10:33:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:18.652 10:33:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:21:18.652 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.652 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:18.653 10:33:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:18.653 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:21:18.653 fio-3.35 00:21:18.653 Starting 1 thread 00:21:24.015 00:21:24.015 test: (groupid=0, jobs=1): err= 0: pid=77091: Sat Dec 7 10:33:23 2024 00:21:24.015 read: IOPS=874, BW=58.1MiB/s (60.9MB/s)(255MiB/4381msec) 00:21:24.015 slat (nsec): min=4288, max=54253, avg=8475.32, stdev=3554.21 00:21:24.015 clat (usec): min=316, max=864, avg=518.04, stdev=61.94 00:21:24.015 lat (usec): min=324, max=877, avg=526.52, stdev=63.38 00:21:24.015 clat percentiles (usec): 00:21:24.015 | 1.00th=[ 379], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 465], 00:21:24.015 | 30.00th=[ 486], 40.00th=[ 515], 50.00th=[ 529], 60.00th=[ 545], 00:21:24.015 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 578], 95.00th=[ 594], 00:21:24.015 | 99.00th=[ 685], 99.50th=[ 701], 99.90th=[ 742], 99.95th=[ 775], 00:21:24.015 | 99.99th=[ 865] 00:21:24.015 write: IOPS=881, BW=58.5MiB/s (61.4MB/s)(256MiB/4376msec); 0 zone resets 00:21:24.015 slat (usec): min=15, max=116, avg=25.57, stdev= 8.00 00:21:24.015 clat (usec): min=361, max=961, avg=574.96, stdev=73.27 00:21:24.015 lat (usec): min=384, max=1000, avg=600.53, stdev=75.78 00:21:24.015 clat percentiles (usec): 00:21:24.015 | 1.00th=[ 420], 5.00th=[ 474], 10.00th=[ 486], 20.00th=[ 510], 00:21:24.015 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 586], 00:21:24.015 | 70.00th=[ 603], 80.00th=[ 644], 90.00th=[ 660], 95.00th=[ 676], 00:21:24.015 | 99.00th=[ 816], 99.50th=[ 865], 99.90th=[ 938], 99.95th=[ 955], 00:21:24.015 | 99.99th=[ 963] 00:21:24.015 bw ( KiB/s): min=56032, max=64464, per=100.00%, avg=60197.00, stdev=4068.93, samples=8 00:21:24.015 iops : min= 824, max= 948, avg=885.25, stdev=59.84, samples=8 00:21:24.015 lat (usec) : 500=25.69%, 750=73.46%, 1000=0.86% 00:21:24.015 cpu : usr=99.11%, 
sys=0.11%, ctx=6, majf=0, minf=1169 00:21:24.015 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:24.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.015 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.015 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:24.015 00:21:24.015 Run status group 0 (all jobs): 00:21:24.015 READ: bw=58.1MiB/s (60.9MB/s), 58.1MiB/s-58.1MiB/s (60.9MB/s-60.9MB/s), io=255MiB (267MB), run=4381-4381msec 00:21:24.015 WRITE: bw=58.5MiB/s (61.4MB/s), 58.5MiB/s-58.5MiB/s (61.4MB/s-61.4MB/s), io=256MiB (269MB), run=4376-4376msec 00:21:25.925 ----------------------------------------------------- 00:21:25.925 Suppressions used: 00:21:25.925 count bytes template 00:21:25.925 1 5 /usr/src/fio/parse.c 00:21:25.925 1 8 libtcmalloc_minimal.so 00:21:25.925 1 904 libcrypto.so 00:21:25.925 ----------------------------------------------------- 00:21:25.925 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:25.925 10:33:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:26.184 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:26.184 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:26.184 fio-3.35 00:21:26.184 Starting 2 threads 00:21:58.407 00:21:58.407 first_half: (groupid=0, jobs=1): err= 0: pid=77200: Sat Dec 7 10:33:54 2024 00:21:58.407 read: IOPS=2352, BW=9409KiB/s (9635kB/s)(255MiB/27736msec) 00:21:58.407 slat (nsec): min=3465, max=61140, avg=8626.09, stdev=4229.96 00:21:58.407 clat (usec): min=1071, max=308822, avg=42409.90, stdev=21917.03 00:21:58.407 lat (usec): min=1087, max=308832, avg=42418.53, stdev=21917.82 00:21:58.407 clat percentiles (msec): 00:21:58.407 | 1.00th=[ 11], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:21:58.408 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:21:58.408 | 70.00th=[ 39], 80.00th=[ 41], 90.00th=[ 46], 95.00th=[ 68], 00:21:58.408 | 99.00th=[ 167], 99.50th=[ 192], 99.90th=[ 220], 99.95th=[ 234], 00:21:58.408 | 99.99th=[ 279] 00:21:58.408 write: IOPS=2835, BW=11.1MiB/s (11.6MB/s)(256MiB/23111msec); 0 zone resets 00:21:58.408 slat (usec): min=3, max=614, avg=10.18, stdev= 7.69 00:21:58.408 clat (usec): min=453, max=103386, avg=11904.32, stdev=20714.58 00:21:58.408 lat (usec): min=477, max=103404, avg=11914.50, stdev=20714.85 00:21:58.408 clat percentiles (usec): 00:21:58.408 | 1.00th=[ 1156], 5.00th=[ 1500], 10.00th=[ 1762], 20.00th=[ 2089], 00:21:58.408 | 30.00th=[ 2573], 40.00th=[ 4424], 50.00th=[ 6128], 60.00th=[ 7373], 00:21:58.408 | 70.00th=[ 8455], 80.00th=[ 12649], 90.00th=[ 16188], 95.00th=[ 84411], 00:21:58.408 | 99.00th=[ 95945], 99.50th=[ 96994], 99.90th=[100140], 99.95th=[101188], 00:21:58.408 | 99.99th=[102237] 00:21:58.408 bw ( KiB/s): min= 976, max=40672, per=97.78%, avg=20160.96, stdev=12839.69, samples=26 00:21:58.408 iops : min= 244, max=10168, avg=5040.23, stdev=3209.92, samples=26 00:21:58.408 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.14% 00:21:58.408 lat (msec) : 2=8.49%, 4=10.32%, 10=19.19%, 20=8.54%, 50=45.80% 00:21:58.408 lat (msec) : 100=5.85%, 250=1.60%, 500=0.01% 00:21:58.408 cpu : usr=99.17%, sys=0.26%, ctx=101, majf=0, minf=5581 00:21:58.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:58.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.408 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:58.408 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.408 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:58.408 second_half: (groupid=0, jobs=1): err= 0: pid=77201: Sat Dec 7 10:33:54 2024 00:21:58.408 read: IOPS=2332, BW=9331KiB/s (9555kB/s)(255MiB/28003msec) 00:21:58.408 slat (nsec): min=3524, max=43035, avg=9561.68, stdev=3229.05 00:21:58.408 clat (usec): min=927, max=316446, avg=41279.45, stdev=22944.25 00:21:58.408 lat (usec): min=936, max=316458, avg=41289.01, stdev=22944.74 00:21:58.408 clat percentiles (msec): 00:21:58.408 | 1.00th=[ 14], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 37], 00:21:58.408 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:21:58.408 | 70.00th=[ 39], 80.00th=[ 40], 90.00th=[ 46], 95.00th=[ 56], 00:21:58.408 | 99.00th=[ 176], 99.50th=[ 
199], 99.90th=[ 236], 99.95th=[ 257], 00:21:58.408 | 99.99th=[ 309] 00:21:58.408 write: IOPS=2577, BW=10.1MiB/s (10.6MB/s)(256MiB/25429msec); 0 zone resets 00:21:58.408 slat (usec): min=4, max=987, avg=10.58, stdev= 7.31 00:21:58.408 clat (usec): min=521, max=104262, avg=13520.18, stdev=22253.41 00:21:58.408 lat (usec): min=530, max=104274, avg=13530.75, stdev=22253.80 00:21:58.408 clat percentiles (usec): 00:21:58.408 | 1.00th=[ 1172], 5.00th=[ 1549], 10.00th=[ 1811], 20.00th=[ 2114], 00:21:58.408 | 30.00th=[ 2573], 40.00th=[ 4359], 50.00th=[ 6128], 60.00th=[ 7701], 00:21:58.408 | 70.00th=[ 9503], 80.00th=[ 13698], 90.00th=[ 38536], 95.00th=[ 86508], 00:21:58.408 | 99.00th=[ 96994], 99.50th=[ 98042], 99.90th=[101188], 99.95th=[102237], 00:21:58.408 | 99.99th=[103285] 00:21:58.408 bw ( KiB/s): min= 24, max=47384, per=90.80%, avg=18720.46, stdev=13773.07, samples=28 00:21:58.408 iops : min= 6, max=11846, avg=4680.07, stdev=3443.29, samples=28 00:21:58.408 lat (usec) : 750=0.02%, 1000=0.14% 00:21:58.408 lat (msec) : 2=8.00%, 4=10.89%, 10=17.12%, 20=9.72%, 50=47.87% 00:21:58.408 lat (msec) : 100=4.66%, 250=1.55%, 500=0.03% 00:21:58.408 cpu : usr=99.10%, sys=0.22%, ctx=37, majf=0, minf=5530 00:21:58.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:58.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.408 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:58.408 issued rwts: total=65321,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.408 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:58.408 00:21:58.408 Run status group 0 (all jobs): 00:21:58.408 READ: bw=18.2MiB/s (19.1MB/s), 9331KiB/s-9409KiB/s (9555kB/s-9635kB/s), io=510MiB (535MB), run=27736-28003msec 00:21:58.408 WRITE: bw=20.1MiB/s (21.1MB/s), 10.1MiB/s-11.1MiB/s (10.6MB/s-11.6MB/s), io=512MiB (537MB), run=23111-25429msec 00:21:58.408 ----------------------------------------------------- 00:21:58.408 Suppressions used: 00:21:58.408 count bytes template 00:21:58.408 2 10 /usr/src/fio/parse.c 00:21:58.408 2 192 /usr/src/fio/iolog.c 00:21:58.408 1 8 libtcmalloc_minimal.so 00:21:58.408 1 904 libcrypto.so 00:21:58.408 ----------------------------------------------------- 00:21:58.408 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:58.408 10:33:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:58.408 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:58.408 fio-3.35 00:21:58.408 Starting 1 thread 00:22:20.350 00:22:20.350 test: (groupid=0, jobs=1): err= 0: pid=77558: Sat Dec 7 10:34:15 2024 00:22:20.350 read: IOPS=6811, BW=26.6MiB/s (27.9MB/s)(255MiB/9573msec) 00:22:20.350 slat (nsec): min=3272, max=91650, avg=8847.25, stdev=4501.75 00:22:20.350 clat (usec): min=879, max=42896, avg=18778.97, stdev=2169.11 00:22:20.350 lat (usec): min=883, max=42908, avg=18787.82, stdev=2168.34 00:22:20.350 clat percentiles (usec): 00:22:20.350 | 1.00th=[16450], 5.00th=[16712], 10.00th=[16909], 20.00th=[17171], 00:22:20.350 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17433], 60.00th=[18220], 00:22:20.350 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21627], 95.00th=[21890], 00:22:20.350 | 99.00th=[22938], 99.50th=[25297], 99.90th=[31851], 99.95th=[37487], 00:22:20.350 | 99.99th=[42206] 00:22:20.350 write: IOPS=8532, BW=33.3MiB/s (34.9MB/s)(256MiB/7681msec); 0 zone resets 00:22:20.350 slat (usec): min=4, max=1271, avg=10.17, stdev=11.20 00:22:20.350 clat (usec): min=681, max=73539, avg=14939.82, stdev=18101.97 00:22:20.350 lat (usec): min=687, max=73547, avg=14949.99, stdev=18102.03 00:22:20.350 clat percentiles (usec): 00:22:20.350 | 1.00th=[ 1139], 5.00th=[ 1532], 10.00th=[ 1778], 20.00th=[ 2114], 00:22:20.350 | 30.00th=[ 2474], 40.00th=[ 3195], 50.00th=[ 8848], 60.00th=[11469], 00:22:20.350 | 70.00th=[14746], 80.00th=[19268], 90.00th=[52691], 95.00th=[58459], 00:22:20.350 | 99.00th=[64750], 99.50th=[66323], 99.90th=[68682], 99.95th=[69731], 00:22:20.350 | 99.99th=[70779] 00:22:20.350 bw ( KiB/s): min= 9592, max=50296, per=96.01%, avg=32768.00, stdev=8504.39, samples=16 00:22:20.350 iops : min= 2398, max=12574, avg=8192.00, stdev=2126.10, samples=16 00:22:20.350 lat (usec) : 750=0.01%, 1000=0.16% 00:22:20.350 lat (msec) : 2=8.23%, 4=12.61%, 10=6.30%, 20=44.74%, 50=22.22% 00:22:20.350 lat (msec) : 100=5.73% 00:22:20.350 cpu : usr=98.78%, sys=0.40%, ctx=48, majf=0, minf=5565 00:22:20.350 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:20.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.351 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:20.351 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:20.351 00:22:20.351 Run status group 0 (all jobs): 00:22:20.351 READ: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=255MiB (267MB), run=9573-9573msec 00:22:20.351 WRITE: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=256MiB (268MB), run=7681-7681msec 00:22:20.351 ----------------------------------------------------- 00:22:20.351 Suppressions used: 00:22:20.351 count bytes template 00:22:20.351 1 5 /usr/src/fio/parse.c 00:22:20.351 2 192 /usr/src/fio/iolog.c 00:22:20.351 1 8 libtcmalloc_minimal.so 00:22:20.351 1 904 libcrypto.so 00:22:20.351 ----------------------------------------------------- 00:22:20.351 00:22:20.351 10:34:18 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:22:20.351 10:34:18 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:20.351 10:34:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:20.351 10:34:18 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:20.351 10:34:18 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:22:20.351 Remove shared memory files 00:22:20.351 10:34:18 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:20.351 10:34:18 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:22:20.351 10:34:18 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:22:20.351 10:34:18 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57734 /dev/shm/spdk_tgt_trace.pid75773 00:22:20.351 10:34:18 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:20.351 10:34:18 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:22:20.351 ************************************ 00:22:20.351 END TEST ftl_fio_basic 00:22:20.351 ************************************ 00:22:20.351 00:22:20.351 real 1m15.698s 00:22:20.351 user 2m45.372s 00:22:20.351 sys 0m4.030s 00:22:20.351 10:34:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:20.351 10:34:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:20.351 10:34:18 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:20.351 10:34:18 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:20.351 10:34:18 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:20.351 10:34:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:20.351 ************************************ 00:22:20.351 START TEST ftl_bdevperf 00:22:20.351 ************************************ 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:20.351 * Looking for test storage... 
00:22:20.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:20.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.351 --rc genhtml_branch_coverage=1 00:22:20.351 --rc genhtml_function_coverage=1 00:22:20.351 --rc genhtml_legend=1 00:22:20.351 --rc geninfo_all_blocks=1 00:22:20.351 --rc geninfo_unexecuted_blocks=1 00:22:20.351 00:22:20.351 ' 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:20.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.351 --rc genhtml_branch_coverage=1 00:22:20.351 
--rc genhtml_function_coverage=1 00:22:20.351 --rc genhtml_legend=1 00:22:20.351 --rc geninfo_all_blocks=1 00:22:20.351 --rc geninfo_unexecuted_blocks=1 00:22:20.351 00:22:20.351 ' 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:20.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.351 --rc genhtml_branch_coverage=1 00:22:20.351 --rc genhtml_function_coverage=1 00:22:20.351 --rc genhtml_legend=1 00:22:20.351 --rc geninfo_all_blocks=1 00:22:20.351 --rc geninfo_unexecuted_blocks=1 00:22:20.351 00:22:20.351 ' 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:20.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.351 --rc genhtml_branch_coverage=1 00:22:20.351 --rc genhtml_function_coverage=1 00:22:20.351 --rc genhtml_legend=1 00:22:20.351 --rc geninfo_all_blocks=1 00:22:20.351 --rc geninfo_unexecuted_blocks=1 00:22:20.351 00:22:20.351 ' 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:20.351 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77837 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77837 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77837 ']' 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.352 10:34:18 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:20.352 [2024-12-07 10:34:18.626905] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
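For readability, the bdevperf.sh prologue traced above condenses to the following sketch; the paths, variable names, and helpers are taken from the xtrace itself (`killprocess` and `waitforlisten` are the autotest_common.sh helpers invoked in the log), so this is a reconstruction of the launch pattern, not the script verbatim.

```bash
# Sketch of the launch sequence shown in the xtrace above.
device=0000:00:11.0            # base NVMe controller
cache_device=0000:00:10.0      # NV cache controller
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
timeout=240

# Start bdevperf in RPC-server mode (-z) with a deferred FTL target (-T ftl0),
# install a cleanup trap, then block until its RPC socket is listening.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!
trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$bdevperf_pid"
```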
00:22:20.352 [2024-12-07 10:34:18.627050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77837 ] 00:22:20.352 [2024-12-07 10:34:18.812127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.352 [2024-12-07 10:34:18.913589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.352 10:34:19 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.352 10:34:19 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:22:20.352 10:34:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:20.352 10:34:19 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:22:20.352 10:34:19 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:20.352 10:34:19 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:22:20.352 10:34:19 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:22:20.352 10:34:19 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:20.611 10:34:19 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:20.611 10:34:19 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:22:20.611 10:34:19 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:20.611 10:34:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:20.611 10:34:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:20.611 10:34:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:20.611 10:34:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:20.611 10:34:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:20.871 10:34:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:20.871 { 00:22:20.871 "name": "nvme0n1", 00:22:20.871 "aliases": [ 00:22:20.871 "d361d8c1-2129-4e8b-99c1-6854f808d37e" 00:22:20.871 ], 00:22:20.871 "product_name": "NVMe disk", 00:22:20.871 "block_size": 4096, 00:22:20.871 "num_blocks": 1310720, 00:22:20.871 "uuid": "d361d8c1-2129-4e8b-99c1-6854f808d37e", 00:22:20.871 "numa_id": -1, 00:22:20.871 "assigned_rate_limits": { 00:22:20.871 "rw_ios_per_sec": 0, 00:22:20.871 "rw_mbytes_per_sec": 0, 00:22:20.871 "r_mbytes_per_sec": 0, 00:22:20.871 "w_mbytes_per_sec": 0 00:22:20.871 }, 00:22:20.871 "claimed": true, 00:22:20.871 "claim_type": "read_many_write_one", 00:22:20.871 "zoned": false, 00:22:20.871 "supported_io_types": { 00:22:20.872 "read": true, 00:22:20.872 "write": true, 00:22:20.872 "unmap": true, 00:22:20.872 "flush": true, 00:22:20.872 "reset": true, 00:22:20.872 "nvme_admin": true, 00:22:20.872 "nvme_io": true, 00:22:20.872 "nvme_io_md": false, 00:22:20.872 "write_zeroes": true, 00:22:20.872 "zcopy": false, 00:22:20.872 "get_zone_info": false, 00:22:20.872 "zone_management": false, 00:22:20.872 "zone_append": false, 00:22:20.872 "compare": true, 00:22:20.872 "compare_and_write": false, 00:22:20.872 "abort": true, 00:22:20.872 "seek_hole": false, 00:22:20.872 "seek_data": false, 00:22:20.872 "copy": true, 00:22:20.872 "nvme_iov_md": false 00:22:20.872 }, 00:22:20.872 "driver_specific": { 00:22:20.872 
"nvme": [ 00:22:20.872 { 00:22:20.872 "pci_address": "0000:00:11.0", 00:22:20.872 "trid": { 00:22:20.872 "trtype": "PCIe", 00:22:20.872 "traddr": "0000:00:11.0" 00:22:20.872 }, 00:22:20.872 "ctrlr_data": { 00:22:20.872 "cntlid": 0, 00:22:20.872 "vendor_id": "0x1b36", 00:22:20.872 "model_number": "QEMU NVMe Ctrl", 00:22:20.872 "serial_number": "12341", 00:22:20.872 "firmware_revision": "8.0.0", 00:22:20.872 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:20.872 "oacs": { 00:22:20.872 "security": 0, 00:22:20.872 "format": 1, 00:22:20.872 "firmware": 0, 00:22:20.872 "ns_manage": 1 00:22:20.872 }, 00:22:20.872 "multi_ctrlr": false, 00:22:20.872 "ana_reporting": false 00:22:20.872 }, 00:22:20.872 "vs": { 00:22:20.872 "nvme_version": "1.4" 00:22:20.872 }, 00:22:20.872 "ns_data": { 00:22:20.872 "id": 1, 00:22:20.872 "can_share": false 00:22:20.872 } 00:22:20.872 } 00:22:20.872 ], 00:22:20.872 "mp_policy": "active_passive" 00:22:20.872 } 00:22:20.872 } 00:22:20.872 ]' 00:22:20.872 10:34:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:20.872 10:34:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:20.872 10:34:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:20.872 10:34:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:20.872 10:34:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:20.872 10:34:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:22:20.872 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:22:20.872 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:20.872 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:22:20.872 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:20.872 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:21.131 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=a21e2941-9c0b-4ea0-baba-2445ec0950b0 00:22:21.131 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:22:21.131 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a21e2941-9c0b-4ea0-baba-2445ec0950b0 00:22:21.391 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:21.391 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=6c86e8e3-c285-4afc-bc9c-1044900333da 00:22:21.391 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6c86e8e3-c285-4afc-bc9c-1044900333da 00:22:21.651 10:34:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=3f31e534-31c1-4d56-914c-094ee293190d 00:22:21.651 10:34:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3f31e534-31c1-4d56-914c-094ee293190d 00:22:21.651 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:22:21.651 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:21.651 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=3f31e534-31c1-4d56-914c-094ee293190d 00:22:21.651 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:22:21.651 10:34:20 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 3f31e534-31c1-4d56-914c-094ee293190d 00:22:21.651 10:34:20 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=3f31e534-31c1-4d56-914c-094ee293190d 00:22:21.651 10:34:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:21.651 10:34:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:21.651 10:34:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:21.651 10:34:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f31e534-31c1-4d56-914c-094ee293190d 00:22:21.911 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:21.911 { 00:22:21.911 "name": "3f31e534-31c1-4d56-914c-094ee293190d", 00:22:21.911 "aliases": [ 00:22:21.911 "lvs/nvme0n1p0" 00:22:21.911 ], 00:22:21.911 "product_name": "Logical Volume", 00:22:21.911 "block_size": 4096, 00:22:21.911 "num_blocks": 26476544, 00:22:21.911 "uuid": "3f31e534-31c1-4d56-914c-094ee293190d", 00:22:21.911 "assigned_rate_limits": { 00:22:21.911 "rw_ios_per_sec": 0, 00:22:21.911 "rw_mbytes_per_sec": 0, 00:22:21.911 "r_mbytes_per_sec": 0, 00:22:21.911 "w_mbytes_per_sec": 0 00:22:21.911 }, 00:22:21.911 "claimed": false, 00:22:21.911 "zoned": false, 00:22:21.911 "supported_io_types": { 00:22:21.911 "read": true, 00:22:21.911 "write": true, 00:22:21.911 "unmap": true, 00:22:21.911 "flush": false, 00:22:21.911 "reset": true, 00:22:21.911 "nvme_admin": false, 00:22:21.911 "nvme_io": false, 00:22:21.911 "nvme_io_md": false, 00:22:21.911 "write_zeroes": true, 00:22:21.911 "zcopy": false, 00:22:21.911 "get_zone_info": false, 00:22:21.911 "zone_management": false, 00:22:21.911 "zone_append": false, 00:22:21.911 "compare": false, 00:22:21.911 "compare_and_write": false, 00:22:21.911 "abort": false, 00:22:21.911 "seek_hole": true, 00:22:21.911 "seek_data": true, 00:22:21.911 "copy": false, 00:22:21.911 "nvme_iov_md": false 00:22:21.911 }, 00:22:21.911 "driver_specific": { 00:22:21.911 "lvol": { 00:22:21.911 "lvol_store_uuid": "6c86e8e3-c285-4afc-bc9c-1044900333da", 00:22:21.911 "base_bdev": "nvme0n1", 00:22:21.911 "thin_provision": true, 00:22:21.911 "num_allocated_clusters": 0, 00:22:21.911 "snapshot": false, 00:22:21.911 "clone": false, 00:22:21.911 "esnap_clone": false 00:22:21.911 } 00:22:21.911 } 00:22:21.911 } 00:22:21.911 ]' 00:22:21.911 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:21.911 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:21.911 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:21.911 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:21.911 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:21.911 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:21.911 10:34:21 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:22:21.911 10:34:21 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:22:21.912 10:34:21 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:22.171 10:34:21 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:22.171 10:34:21 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:22.171 10:34:21 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 3f31e534-31c1-4d56-914c-094ee293190d 00:22:22.171 10:34:21 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=3f31e534-31c1-4d56-914c-094ee293190d 00:22:22.171 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:22.171 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:22.171 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:22.171 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f31e534-31c1-4d56-914c-094ee293190d 00:22:22.432 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:22.432 { 00:22:22.432 "name": "3f31e534-31c1-4d56-914c-094ee293190d", 00:22:22.432 "aliases": [ 00:22:22.432 "lvs/nvme0n1p0" 00:22:22.432 ], 00:22:22.432 "product_name": "Logical Volume", 00:22:22.432 "block_size": 4096, 00:22:22.432 "num_blocks": 26476544, 00:22:22.432 "uuid": "3f31e534-31c1-4d56-914c-094ee293190d", 00:22:22.432 "assigned_rate_limits": { 00:22:22.432 "rw_ios_per_sec": 0, 00:22:22.432 "rw_mbytes_per_sec": 0, 00:22:22.432 "r_mbytes_per_sec": 0, 00:22:22.432 "w_mbytes_per_sec": 0 00:22:22.432 }, 00:22:22.432 "claimed": false, 00:22:22.432 "zoned": false, 00:22:22.432 "supported_io_types": { 00:22:22.432 "read": true, 00:22:22.432 "write": true, 00:22:22.432 "unmap": true, 00:22:22.432 "flush": false, 00:22:22.432 "reset": true, 00:22:22.432 "nvme_admin": false, 00:22:22.432 "nvme_io": false, 00:22:22.432 "nvme_io_md": false, 00:22:22.432 "write_zeroes": true, 00:22:22.432 "zcopy": false, 00:22:22.432 "get_zone_info": false, 00:22:22.432 "zone_management": false, 00:22:22.432 "zone_append": false, 00:22:22.432 "compare": false, 00:22:22.432 "compare_and_write": false, 00:22:22.432 "abort": false, 00:22:22.432 "seek_hole": true, 00:22:22.432 "seek_data": true, 00:22:22.432 "copy": false, 00:22:22.432 "nvme_iov_md": false 00:22:22.432 }, 00:22:22.432 "driver_specific": { 00:22:22.432 "lvol": { 00:22:22.432 "lvol_store_uuid": "6c86e8e3-c285-4afc-bc9c-1044900333da", 00:22:22.432 "base_bdev": "nvme0n1", 00:22:22.432 "thin_provision": true, 00:22:22.432 "num_allocated_clusters": 0, 00:22:22.432 "snapshot": false, 00:22:22.432 "clone": false, 00:22:22.432 "esnap_clone": false 00:22:22.432 } 00:22:22.432 } 00:22:22.432 } 00:22:22.432 ]' 00:22:22.432 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:22.432 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:22.432 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:22.432 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:22.432 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:22.432 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:22.432 10:34:21 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:22:22.432 10:34:21 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:22.692 10:34:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:22:22.692 10:34:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 3f31e534-31c1-4d56-914c-094ee293190d 00:22:22.692 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=3f31e534-31c1-4d56-914c-094ee293190d 00:22:22.692 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:22.692 10:34:21 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:22:22.692 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:22.692 10:34:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f31e534-31c1-4d56-914c-094ee293190d 00:22:22.952 10:34:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:22.952 { 00:22:22.952 "name": "3f31e534-31c1-4d56-914c-094ee293190d", 00:22:22.952 "aliases": [ 00:22:22.952 "lvs/nvme0n1p0" 00:22:22.952 ], 00:22:22.952 "product_name": "Logical Volume", 00:22:22.952 "block_size": 4096, 00:22:22.952 "num_blocks": 26476544, 00:22:22.952 "uuid": "3f31e534-31c1-4d56-914c-094ee293190d", 00:22:22.952 "assigned_rate_limits": { 00:22:22.952 "rw_ios_per_sec": 0, 00:22:22.952 "rw_mbytes_per_sec": 0, 00:22:22.952 "r_mbytes_per_sec": 0, 00:22:22.952 "w_mbytes_per_sec": 0 00:22:22.952 }, 00:22:22.952 "claimed": false, 00:22:22.952 "zoned": false, 00:22:22.952 "supported_io_types": { 00:22:22.952 "read": true, 00:22:22.952 "write": true, 00:22:22.952 "unmap": true, 00:22:22.952 "flush": false, 00:22:22.952 "reset": true, 00:22:22.952 "nvme_admin": false, 00:22:22.952 "nvme_io": false, 00:22:22.952 "nvme_io_md": false, 00:22:22.952 "write_zeroes": true, 00:22:22.952 "zcopy": false, 00:22:22.952 "get_zone_info": false, 00:22:22.952 "zone_management": false, 00:22:22.952 "zone_append": false, 00:22:22.952 "compare": false, 00:22:22.952 "compare_and_write": false, 00:22:22.952 "abort": false, 00:22:22.952 "seek_hole": true, 00:22:22.952 "seek_data": true, 00:22:22.952 "copy": false, 00:22:22.952 "nvme_iov_md": false 00:22:22.952 }, 00:22:22.952 "driver_specific": { 00:22:22.952 "lvol": { 00:22:22.952 "lvol_store_uuid": "6c86e8e3-c285-4afc-bc9c-1044900333da", 00:22:22.952 "base_bdev": "nvme0n1", 00:22:22.952 "thin_provision": true, 00:22:22.952 "num_allocated_clusters": 0, 00:22:22.952 "snapshot": false, 00:22:22.952 "clone": false, 00:22:22.952 "esnap_clone": false 00:22:22.952 } 00:22:22.952 } 00:22:22.952 } 00:22:22.952 ]' 00:22:22.952 10:34:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:22.952 10:34:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:22.952 10:34:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:22.952 10:34:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:22.952 10:34:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:22.952 10:34:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:22.952 10:34:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:22:22.952 10:34:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3f31e534-31c1-4d56-914c-094ee293190d -c nvc0n1p0 --l2p_dram_limit 20 00:22:23.212 [2024-12-07 10:34:22.410812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.212 [2024-12-07 10:34:22.410861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:23.212 [2024-12-07 10:34:22.410877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:23.212 [2024-12-07 10:34:22.410890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.212 [2024-12-07 10:34:22.410957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.212 [2024-12-07 10:34:22.410972] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:23.212 [2024-12-07 10:34:22.411001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:23.212 [2024-12-07 10:34:22.411029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.212 [2024-12-07 10:34:22.411050] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:23.213 [2024-12-07 10:34:22.412049] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:23.213 [2024-12-07 10:34:22.412079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.213 [2024-12-07 10:34:22.412094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:23.213 [2024-12-07 10:34:22.412105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:22:23.213 [2024-12-07 10:34:22.412118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.213 [2024-12-07 10:34:22.412192] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9b2468e1-5b3c-4088-9a1f-4e382fb347ad 00:22:23.213 [2024-12-07 10:34:22.413788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.213 [2024-12-07 10:34:22.413904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:23.213 [2024-12-07 10:34:22.414001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:23.213 [2024-12-07 10:34:22.414040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.213 [2024-12-07 10:34:22.421712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.213 [2024-12-07 10:34:22.421854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:23.213 [2024-12-07 10:34:22.422085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.606 ms 00:22:23.213 [2024-12-07 10:34:22.422128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.213 [2024-12-07 10:34:22.422273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.213 [2024-12-07 10:34:22.422370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:23.213 [2024-12-07 10:34:22.422455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:23.213 [2024-12-07 10:34:22.422486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.213 [2024-12-07 10:34:22.422589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.213 [2024-12-07 10:34:22.422629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:23.213 [2024-12-07 10:34:22.422663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:23.213 [2024-12-07 10:34:22.422694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.213 [2024-12-07 10:34:22.422824] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:23.213 [2024-12-07 10:34:22.427938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.213 [2024-12-07 10:34:22.428085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:23.213 [2024-12-07 10:34:22.428105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.140 ms 00:22:23.213 [2024-12-07 10:34:22.428124] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.213 [2024-12-07 10:34:22.428160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.213 [2024-12-07 10:34:22.428174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:23.213 [2024-12-07 10:34:22.428185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:23.213 [2024-12-07 10:34:22.428198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.213 [2024-12-07 10:34:22.428240] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:23.213 [2024-12-07 10:34:22.428383] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:23.213 [2024-12-07 10:34:22.428399] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:23.213 [2024-12-07 10:34:22.428416] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:23.213 [2024-12-07 10:34:22.428429] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:23.213 [2024-12-07 10:34:22.428445] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:23.213 [2024-12-07 10:34:22.428456] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:23.213 [2024-12-07 10:34:22.428468] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:23.213 [2024-12-07 10:34:22.428478] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:23.213 [2024-12-07 10:34:22.428492] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:23.213 [2024-12-07 10:34:22.428505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.213 [2024-12-07 10:34:22.428517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:23.213 [2024-12-07 10:34:22.428528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:22:23.213 [2024-12-07 10:34:22.428542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.213 [2024-12-07 10:34:22.428614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.213 [2024-12-07 10:34:22.428628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:23.213 [2024-12-07 10:34:22.428639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:23.213 [2024-12-07 10:34:22.428654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.213 [2024-12-07 10:34:22.428730] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:23.213 [2024-12-07 10:34:22.428748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:23.213 [2024-12-07 10:34:22.428758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:23.213 [2024-12-07 10:34:22.428780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.213 [2024-12-07 10:34:22.428791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:23.213 [2024-12-07 10:34:22.428804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:23.213 [2024-12-07 10:34:22.428814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:23.213 
[2024-12-07 10:34:22.428826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:23.213 [2024-12-07 10:34:22.428836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:23.213 [2024-12-07 10:34:22.428848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:23.213 [2024-12-07 10:34:22.428858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:23.213 [2024-12-07 10:34:22.428881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:23.213 [2024-12-07 10:34:22.428893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:23.213 [2024-12-07 10:34:22.428906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:23.213 [2024-12-07 10:34:22.428915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:23.213 [2024-12-07 10:34:22.428929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.213 [2024-12-07 10:34:22.428939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:23.213 [2024-12-07 10:34:22.428951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:23.213 [2024-12-07 10:34:22.428960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.213 [2024-12-07 10:34:22.428971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:23.213 [2024-12-07 10:34:22.428993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:23.213 [2024-12-07 10:34:22.429005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:23.213 [2024-12-07 10:34:22.429014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:23.213 [2024-12-07 10:34:22.429027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:23.213 [2024-12-07 10:34:22.429036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:23.213 [2024-12-07 10:34:22.429048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:23.213 [2024-12-07 10:34:22.429057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:23.213 [2024-12-07 10:34:22.429069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:23.213 [2024-12-07 10:34:22.429078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:23.213 [2024-12-07 10:34:22.429090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:23.213 [2024-12-07 10:34:22.429098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:23.213 [2024-12-07 10:34:22.429112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:23.213 [2024-12-07 10:34:22.429121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:23.213 [2024-12-07 10:34:22.429132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:23.213 [2024-12-07 10:34:22.429141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:23.213 [2024-12-07 10:34:22.429155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:23.213 [2024-12-07 10:34:22.429164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:23.213 [2024-12-07 10:34:22.429176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:23.213 [2024-12-07 10:34:22.429184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:22:23.213 [2024-12-07 10:34:22.429199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.213 [2024-12-07 10:34:22.429208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:23.213 [2024-12-07 10:34:22.429220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:23.213 [2024-12-07 10:34:22.429229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.213 [2024-12-07 10:34:22.429241] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:23.213 [2024-12-07 10:34:22.429251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:23.213 [2024-12-07 10:34:22.429264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:23.213 [2024-12-07 10:34:22.429274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.213 [2024-12-07 10:34:22.429289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:23.213 [2024-12-07 10:34:22.429298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:23.213 [2024-12-07 10:34:22.429309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:23.213 [2024-12-07 10:34:22.429319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:23.213 [2024-12-07 10:34:22.429330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:23.213 [2024-12-07 10:34:22.429339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:23.214 [2024-12-07 10:34:22.429352] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:23.214 [2024-12-07 10:34:22.429364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:23.214 [2024-12-07 10:34:22.429378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:23.214 [2024-12-07 10:34:22.429389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:23.214 [2024-12-07 10:34:22.429401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:23.214 [2024-12-07 10:34:22.429411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:23.214 [2024-12-07 10:34:22.429423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:23.214 [2024-12-07 10:34:22.429433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:23.214 [2024-12-07 10:34:22.429446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:23.214 [2024-12-07 10:34:22.429456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:23.214 [2024-12-07 10:34:22.429472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:23.214 [2024-12-07 10:34:22.429482] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:23.214 [2024-12-07 10:34:22.429495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:23.214 [2024-12-07 10:34:22.429505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:23.214 [2024-12-07 10:34:22.429517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:23.214 [2024-12-07 10:34:22.429527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:23.214 [2024-12-07 10:34:22.429539] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:23.214 [2024-12-07 10:34:22.429550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:23.214 [2024-12-07 10:34:22.429566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:23.214 [2024-12-07 10:34:22.429577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:23.214 [2024-12-07 10:34:22.429590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:23.214 [2024-12-07 10:34:22.429601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:23.214 [2024-12-07 10:34:22.429615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.214 [2024-12-07 10:34:22.429626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:23.214 [2024-12-07 10:34:22.429640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:22:23.214 [2024-12-07 10:34:22.429649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.214 [2024-12-07 10:34:22.429689] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
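Before the FTL startup trace above, the device stack was assembled over several RPC calls scattered through the log; gathered in one place, the sequence is roughly the following (a sketch — the lvstore and lvol UUIDs are the run-specific values printed earlier, shown here as placeholders):

```bash
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base bdev nvme0n1
$rpc_py bdev_lvol_delete_lvstore -u <stale-lvstore-uuid>               # clear_lvols: drop leftovers
$rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
$rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>         # thin-provisioned, 103424 MiB
$rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache controller
$rpc_py bdev_split_create nvc0n1 -s 5171 1                             # -> nvc0n1p0, 5171 MiB cache
$rpc_py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20
```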
00:22:23.214 [2024-12-07 10:34:22.429702] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:27.406 [2024-12-07 10:34:26.058910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.059001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:27.406 [2024-12-07 10:34:26.059023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3635.107 ms 00:22:27.406 [2024-12-07 10:34:26.059034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.093434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.093485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:27.406 [2024-12-07 10:34:26.093503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.111 ms 00:22:27.406 [2024-12-07 10:34:26.093514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.093633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.093645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:27.406 [2024-12-07 10:34:26.093661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:27.406 [2024-12-07 10:34:26.093671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.166104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.166151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:27.406 [2024-12-07 10:34:26.166172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.492 ms 00:22:27.406 [2024-12-07 10:34:26.166183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.166228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.166239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:27.406 [2024-12-07 10:34:26.166253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:27.406 [2024-12-07 10:34:26.166266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.166763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.166778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:27.406 [2024-12-07 10:34:26.166791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:22:27.406 [2024-12-07 10:34:26.166801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.166911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.166924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:27.406 [2024-12-07 10:34:26.166940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:22:27.406 [2024-12-07 10:34:26.166950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.186198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.186236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:27.406 [2024-12-07 
10:34:26.186255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.255 ms 00:22:27.406 [2024-12-07 10:34:26.186278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.198693] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:22:27.406 [2024-12-07 10:34:26.204692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.204875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:27.406 [2024-12-07 10:34:26.204899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.362 ms 00:22:27.406 [2024-12-07 10:34:26.204913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.300170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.300235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:27.406 [2024-12-07 10:34:26.300252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.377 ms 00:22:27.406 [2024-12-07 10:34:26.300264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.300444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.300463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:27.406 [2024-12-07 10:34:26.300474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:22:27.406 [2024-12-07 10:34:26.300490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.334967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.335021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:27.406 [2024-12-07 10:34:26.335037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.484 ms 00:22:27.406 [2024-12-07 10:34:26.335051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.368678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.368718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:27.406 [2024-12-07 10:34:26.368732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.641 ms 00:22:27.406 [2024-12-07 10:34:26.368744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.369457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.369488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:27.406 [2024-12-07 10:34:26.369500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:22:27.406 [2024-12-07 10:34:26.369513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.468572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.468740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:27.406 [2024-12-07 10:34:26.468790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.166 ms 00:22:27.406 [2024-12-07 10:34:26.468803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 
10:34:26.504852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.504894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:27.406 [2024-12-07 10:34:26.504911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.030 ms 00:22:27.406 [2024-12-07 10:34:26.504923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.539613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.539655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:27.406 [2024-12-07 10:34:26.539678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.707 ms 00:22:27.406 [2024-12-07 10:34:26.539705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.574263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.574303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:27.406 [2024-12-07 10:34:26.574316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.576 ms 00:22:27.406 [2024-12-07 10:34:26.574329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.574369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.574386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:27.406 [2024-12-07 10:34:26.574396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:27.406 [2024-12-07 10:34:26.574408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.574499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.406 [2024-12-07 10:34:26.574514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:27.406 [2024-12-07 10:34:26.574524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:27.406 [2024-12-07 10:34:26.574543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.406 [2024-12-07 10:34:26.575614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4171.123 ms, result 0 00:22:27.406 { 00:22:27.406 "name": "ftl0", 00:22:27.406 "uuid": "9b2468e1-5b3c-4088-9a1f-4e382fb347ad" 00:22:27.406 } 00:22:27.406 10:34:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:22:27.406 10:34:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:22:27.406 10:34:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:22:27.665 10:34:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:22:27.666 [2024-12-07 10:34:26.875564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:27.666 I/O size of 69632 is greater than zero copy threshold (65536). 00:22:27.666 Zero copy mechanism will not be used. 00:22:27.666 Running I/O for 4 seconds... 
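The workload phase then begins: the stage checks that ftl0 is registered and kicks off the first of three 4-second bdevperf jobs. A minimal sketch of the invocations shown above, with the zero-copy notice explained:

```bash
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

# Sanity check that the FTL bdev exists before driving I/O at it.
$rpc_py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0

# Workload 1: queue depth 1, random writes, 69632-byte I/Os for 4 s.
# 69632 B = 17 x 4096 B = 68 KiB, above the 65536 B zero-copy threshold,
# hence the "Zero copy mechanism will not be used" notice above.
$bdevperf_py perform_tests -q 1 -w randwrite -t 4 -o 69632
```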
00:22:29.542 1457.00 IOPS, 96.75 MiB/s [2024-12-07T10:34:30.275Z] 1466.00 IOPS, 97.35 MiB/s [2024-12-07T10:34:31.215Z] 1487.00 IOPS, 98.75 MiB/s [2024-12-07T10:34:31.215Z] 1508.50 IOPS, 100.17 MiB/s 00:22:31.862 Latency(us) 00:22:31.862 [2024-12-07T10:34:31.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.862 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:22:31.862 ftl0 : 4.00 1508.21 100.15 0.00 0.00 696.39 251.68 2158.21 00:22:31.862 [2024-12-07T10:34:31.215Z] =================================================================================================================== 00:22:31.862 [2024-12-07T10:34:31.216Z] Total : 1508.21 100.15 0.00 0.00 696.39 251.68 2158.21 00:22:31.863 [2024-12-07 10:34:30.880361] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:22:31.863 { 00:22:31.863 "results": [ 00:22:31.863 { 00:22:31.863 "job": "ftl0", 00:22:31.863 "core_mask": "0x1", 00:22:31.863 "workload": "randwrite", 00:22:31.863 "status": "finished", 00:22:31.863 "queue_depth": 1, 00:22:31.863 "io_size": 69632, 00:22:31.863 "runtime": 4.002107, 00:22:31.863 "iops": 1508.205552725102, 00:22:31.863 "mibps": 100.15427498565131, 00:22:31.863 "io_failed": 0, 00:22:31.863 "io_timeout": 0, 00:22:31.863 "avg_latency_us": 696.3913206171272, 00:22:31.863 "min_latency_us": 251.68192771084338, 00:22:31.863 "max_latency_us": 2158.213654618474 00:22:31.863 } 00:22:31.863 ], 00:22:31.863 "core_count": 1 00:22:31.863 } 00:22:31.863 10:34:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:22:31.863 [2024-12-07 10:34:31.013140] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:31.863 Running I/O for 4 seconds... 
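Per the trace above, the second job switches to queue depth 128 with 4 KiB random writes. As a quick consistency check on the summary row that follows, the reported IOPS and MiB/s agree:

```bash
# 11219.60 IOPS x 4096 B per I/O, converted to MiB/s:
echo '11219.597351604245 * 4096 / 1048576' | bc -l   # ~= 43.83, matching the summary row
```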
00:22:33.736 11704.00 IOPS, 45.72 MiB/s [2024-12-07T10:34:34.026Z] 11586.00 IOPS, 45.26 MiB/s [2024-12-07T10:34:35.403Z] 11156.67 IOPS, 43.58 MiB/s [2024-12-07T10:34:35.403Z] 11231.50 IOPS, 43.87 MiB/s 00:22:36.050 Latency(us) 00:22:36.050 [2024-12-07T10:34:35.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.050 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:22:36.050 ftl0 : 4.02 11219.60 43.83 0.00 0.00 11385.60 230.30 20318.79 00:22:36.050 [2024-12-07T10:34:35.403Z] =================================================================================================================== 00:22:36.050 [2024-12-07T10:34:35.403Z] Total : 11219.60 43.83 0.00 0.00 11385.60 0.00 20318.79 00:22:36.050 [2024-12-07 10:34:35.032158] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:22:36.050 { 00:22:36.050 "results": [ 00:22:36.050 { 00:22:36.050 "job": "ftl0", 00:22:36.050 "core_mask": "0x1", 00:22:36.050 "workload": "randwrite", 00:22:36.050 "status": "finished", 00:22:36.050 "queue_depth": 128, 00:22:36.050 "io_size": 4096, 00:22:36.050 "runtime": 4.015563, 00:22:36.050 "iops": 11219.597351604245, 00:22:36.050 "mibps": 43.82655215470408, 00:22:36.050 "io_failed": 0, 00:22:36.050 "io_timeout": 0, 00:22:36.050 "avg_latency_us": 11385.599881906155, 00:22:36.050 "min_latency_us": 230.29718875502007, 00:22:36.050 "max_latency_us": 20318.791967871486 00:22:36.050 } 00:22:36.050 ], 00:22:36.050 "core_count": 1 00:22:36.050 } 00:22:36.050 10:34:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:22:36.050 [2024-12-07 10:34:35.151827] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:36.050 Running I/O for 4 seconds... 
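The final job is a verify pass at the same queue depth and I/O size; in bdevperf's verify mode the written data is read back and compared, so it exercises the FTL read path as well. Sketch of the invocation shown above:

```bash
# Workload 3: queue depth 128, 4 KiB I/Os, verify (write, read back, compare) for 4 s.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests \
    -q 128 -w verify -t 4 -o 4096
```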
00:22:37.931 8711.00 IOPS, 34.03 MiB/s [2024-12-07T10:34:38.222Z] 8034.00 IOPS, 31.38 MiB/s [2024-12-07T10:34:39.164Z] 7831.00 IOPS, 30.59 MiB/s [2024-12-07T10:34:39.424Z] 8105.25 IOPS, 31.66 MiB/s 00:22:40.071 Latency(us) 00:22:40.071 [2024-12-07T10:34:39.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.071 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:40.071 Verification LBA range: start 0x0 length 0x1400000 00:22:40.071 ftl0 : 4.01 8117.92 31.71 0.00 0.00 15721.12 264.84 20529.35 00:22:40.071 [2024-12-07T10:34:39.424Z] =================================================================================================================== 00:22:40.071 [2024-12-07T10:34:39.424Z] Total : 8117.92 31.71 0.00 0.00 15721.12 0.00 20529.35 00:22:40.071 [2024-12-07 10:34:39.173839] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:22:40.071 { 00:22:40.071 "results": [ 00:22:40.071 { 00:22:40.071 "job": "ftl0", 00:22:40.071 "core_mask": "0x1", 00:22:40.071 "workload": "verify", 00:22:40.071 "status": "finished", 00:22:40.071 "verify_range": { 00:22:40.071 "start": 0, 00:22:40.071 "length": 20971520 00:22:40.071 }, 00:22:40.071 "queue_depth": 128, 00:22:40.071 "io_size": 4096, 00:22:40.071 "runtime": 4.009402, 00:22:40.072 "iops": 8117.918831785887, 00:22:40.072 "mibps": 31.71062043666362, 00:22:40.072 "io_failed": 0, 00:22:40.072 "io_timeout": 0, 00:22:40.072 "avg_latency_us": 15721.121080216159, 00:22:40.072 "min_latency_us": 264.8417670682731, 00:22:40.072 "max_latency_us": 20529.349397590362 00:22:40.072 } 00:22:40.072 ], 00:22:40.072 "core_count": 1 00:22:40.072 } 00:22:40.072 10:34:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:22:40.072 [2024-12-07 10:34:39.381238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.072 [2024-12-07 10:34:39.381289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:40.072 [2024-12-07 10:34:39.381305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:40.072 [2024-12-07 10:34:39.381318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.072 [2024-12-07 10:34:39.381340] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:40.072 [2024-12-07 10:34:39.385533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.072 [2024-12-07 10:34:39.385672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:40.072 [2024-12-07 10:34:39.385716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.178 ms 00:22:40.072 [2024-12-07 10:34:39.385727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.072 [2024-12-07 10:34:39.387618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.072 [2024-12-07 10:34:39.387657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:40.072 [2024-12-07 10:34:39.387677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.856 ms 00:22:40.072 [2024-12-07 10:34:39.387688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.332 [2024-12-07 10:34:39.596497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.332 [2024-12-07 10:34:39.596556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:22:40.332 [2024-12-07 10:34:39.596594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 209.123 ms 00:22:40.332 [2024-12-07 10:34:39.596606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.332 [2024-12-07 10:34:39.601501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.332 [2024-12-07 10:34:39.601533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:40.332 [2024-12-07 10:34:39.601547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.850 ms 00:22:40.332 [2024-12-07 10:34:39.601577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.332 [2024-12-07 10:34:39.636494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.332 [2024-12-07 10:34:39.636529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:40.332 [2024-12-07 10:34:39.636545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.906 ms 00:22:40.332 [2024-12-07 10:34:39.636554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.332 [2024-12-07 10:34:39.658139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.332 [2024-12-07 10:34:39.658317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:40.332 [2024-12-07 10:34:39.658344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.579 ms 00:22:40.332 [2024-12-07 10:34:39.658355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.332 [2024-12-07 10:34:39.658516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.332 [2024-12-07 10:34:39.658530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:40.332 [2024-12-07 10:34:39.658555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:40.332 [2024-12-07 10:34:39.658566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.593 [2024-12-07 10:34:39.693763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.593 [2024-12-07 10:34:39.693898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:40.593 [2024-12-07 10:34:39.693922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.233 ms 00:22:40.593 [2024-12-07 10:34:39.693948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.593 [2024-12-07 10:34:39.728274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.593 [2024-12-07 10:34:39.728308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:40.593 [2024-12-07 10:34:39.728324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.311 ms 00:22:40.593 [2024-12-07 10:34:39.728333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.593 [2024-12-07 10:34:39.762375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.593 [2024-12-07 10:34:39.762501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:40.593 [2024-12-07 10:34:39.762525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.056 ms 00:22:40.593 [2024-12-07 10:34:39.762558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.593 [2024-12-07 10:34:39.796736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.593 [2024-12-07 10:34:39.796877] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:40.593 [2024-12-07 10:34:39.796905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.142 ms 00:22:40.593 [2024-12-07 10:34:39.796915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.593 [2024-12-07 10:34:39.796969] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:40.593 [2024-12-07 10:34:39.797000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:40.593 [2024-12-07 10:34:39.797016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:40.593 [2024-12-07 10:34:39.797027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:40.593 [2024-12-07 10:34:39.797041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:40.593 [2024-12-07 10:34:39.797052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:40.593 [2024-12-07 10:34:39.797066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:40.593 [2024-12-07 10:34:39.797076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:40.593 [2024-12-07 10:34:39.797089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:40.593 [2024-12-07 10:34:39.797100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:22:40.594 [2024-12-07 10:34:39.797268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.797999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798205] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:40.594 [2024-12-07 10:34:39.798220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:40.595 [2024-12-07 10:34:39.798231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:40.595 [2024-12-07 10:34:39.798244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:40.595 [2024-12-07 10:34:39.798261] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:40.595 [2024-12-07 10:34:39.798274] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9b2468e1-5b3c-4088-9a1f-4e382fb347ad 00:22:40.595 [2024-12-07 10:34:39.798287] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:40.595 [2024-12-07 10:34:39.798300] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:40.595 [2024-12-07 10:34:39.798310] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:40.595 [2024-12-07 10:34:39.798323] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:40.595 [2024-12-07 10:34:39.798332] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:40.595 [2024-12-07 10:34:39.798345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:40.595 [2024-12-07 10:34:39.798354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:40.595 [2024-12-07 10:34:39.798369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:40.595 [2024-12-07 10:34:39.798378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:40.595 [2024-12-07 10:34:39.798390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.595 [2024-12-07 10:34:39.798400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:40.595 [2024-12-07 10:34:39.798413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:22:40.595 [2024-12-07 10:34:39.798423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.595 [2024-12-07 10:34:39.817460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.595 [2024-12-07 10:34:39.817492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:40.595 [2024-12-07 10:34:39.817507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.999 ms 00:22:40.595 [2024-12-07 10:34:39.817516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.595 [2024-12-07 10:34:39.818122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.595 [2024-12-07 10:34:39.818134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:40.595 [2024-12-07 10:34:39.818163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:22:40.595 [2024-12-07 10:34:39.818173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.595 [2024-12-07 10:34:39.871554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.595 [2024-12-07 10:34:39.871591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:40.595 [2024-12-07 10:34:39.871610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.595 [2024-12-07 10:34:39.871620] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:40.595 [2024-12-07 10:34:39.871686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.595 [2024-12-07 10:34:39.871697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:40.595 [2024-12-07 10:34:39.871709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.595 [2024-12-07 10:34:39.871719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.595 [2024-12-07 10:34:39.871799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.595 [2024-12-07 10:34:39.871812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:40.595 [2024-12-07 10:34:39.871825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.595 [2024-12-07 10:34:39.871835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.595 [2024-12-07 10:34:39.871855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.595 [2024-12-07 10:34:39.871865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:40.595 [2024-12-07 10:34:39.871878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.595 [2024-12-07 10:34:39.871888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.854 [2024-12-07 10:34:39.990785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.854 [2024-12-07 10:34:39.991011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:40.854 [2024-12-07 10:34:39.991044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.854 [2024-12-07 10:34:39.991056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.854 [2024-12-07 10:34:40.090585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.854 [2024-12-07 10:34:40.090838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:40.854 [2024-12-07 10:34:40.090868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.854 [2024-12-07 10:34:40.090880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.854 [2024-12-07 10:34:40.091025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.854 [2024-12-07 10:34:40.091040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:40.854 [2024-12-07 10:34:40.091053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.854 [2024-12-07 10:34:40.091063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.854 [2024-12-07 10:34:40.091122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.854 [2024-12-07 10:34:40.091134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:40.854 [2024-12-07 10:34:40.091148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.854 [2024-12-07 10:34:40.091159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.854 [2024-12-07 10:34:40.091274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.854 [2024-12-07 10:34:40.091291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:40.854 [2024-12-07 10:34:40.091308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:22:40.854 [2024-12-07 10:34:40.091318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.854 [2024-12-07 10:34:40.091363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.854 [2024-12-07 10:34:40.091377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:40.854 [2024-12-07 10:34:40.091389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.854 [2024-12-07 10:34:40.091399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.854 [2024-12-07 10:34:40.091441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.854 [2024-12-07 10:34:40.091455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:40.854 [2024-12-07 10:34:40.091467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.854 [2024-12-07 10:34:40.091488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.854 [2024-12-07 10:34:40.091533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.854 [2024-12-07 10:34:40.091545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:40.854 [2024-12-07 10:34:40.091559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.854 [2024-12-07 10:34:40.091569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.854 [2024-12-07 10:34:40.091699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 711.573 ms, result 0 00:22:40.854 true 00:22:40.854 10:34:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77837 00:22:40.854 10:34:40 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77837 ']' 00:22:40.854 10:34:40 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77837 00:22:40.854 10:34:40 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:22:40.854 10:34:40 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.854 10:34:40 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77837 00:22:40.854 killing process with pid 77837 00:22:40.854 Received shutdown signal, test time was about 4.000000 seconds 00:22:40.854 00:22:40.854 Latency(us) 00:22:40.854 [2024-12-07T10:34:40.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.854 [2024-12-07T10:34:40.207Z] =================================================================================================================== 00:22:40.854 [2024-12-07T10:34:40.207Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.854 10:34:40 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:40.854 10:34:40 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:40.854 10:34:40 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77837' 00:22:40.854 10:34:40 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77837 00:22:40.854 10:34:40 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77837 00:22:42.233 10:34:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:42.233 10:34:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:22:42.233 Remove shared memory files 00:22:42.233 10:34:41 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:42.233 10:34:41 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:22:42.233 10:34:41 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:22:42.233 10:34:41 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:22:42.233 10:34:41 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:42.233 10:34:41 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:22:42.233 ************************************ 00:22:42.233 END TEST ftl_bdevperf 00:22:42.233 ************************************ 00:22:42.233 00:22:42.233 real 0m23.173s 00:22:42.233 user 0m25.692s 00:22:42.233 sys 0m1.232s 00:22:42.233 10:34:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.233 10:34:41 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:42.233 10:34:41 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:42.233 10:34:41 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:42.233 10:34:41 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:42.233 10:34:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:42.233 ************************************ 00:22:42.233 START TEST ftl_trim 00:22:42.233 ************************************ 00:22:42.233 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:42.493 * Looking for test storage... 00:22:42.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:42.493 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:42.493 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:42.493 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:22:42.493 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.493 10:34:41 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:22:42.493 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.493 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.493 --rc genhtml_branch_coverage=1 00:22:42.493 --rc genhtml_function_coverage=1 00:22:42.493 --rc genhtml_legend=1 00:22:42.493 --rc geninfo_all_blocks=1 00:22:42.493 --rc geninfo_unexecuted_blocks=1 00:22:42.493 00:22:42.493 ' 00:22:42.493 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.493 --rc genhtml_branch_coverage=1 00:22:42.493 --rc genhtml_function_coverage=1 00:22:42.493 --rc genhtml_legend=1 00:22:42.493 --rc geninfo_all_blocks=1 00:22:42.493 --rc geninfo_unexecuted_blocks=1 00:22:42.493 00:22:42.493 ' 00:22:42.493 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.493 --rc genhtml_branch_coverage=1 00:22:42.493 --rc genhtml_function_coverage=1 00:22:42.493 --rc genhtml_legend=1 00:22:42.493 --rc geninfo_all_blocks=1 00:22:42.493 --rc geninfo_unexecuted_blocks=1 00:22:42.493 00:22:42.493 ' 00:22:42.493 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.493 --rc genhtml_branch_coverage=1 00:22:42.493 --rc genhtml_function_coverage=1 00:22:42.493 --rc genhtml_legend=1 00:22:42.493 --rc geninfo_all_blocks=1 00:22:42.493 --rc geninfo_unexecuted_blocks=1 00:22:42.493 00:22:42.493 ' 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:22:42.493 10:34:41 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:22:42.494 10:34:41 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:22:42.494 10:34:41 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:22:42.494 10:34:41 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:22:42.494 10:34:41 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:22:42.494 10:34:41 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:22:42.494 10:34:41 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:42.494 10:34:41 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:42.494 10:34:41 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:22:42.494 10:34:41 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78191 00:22:42.494 10:34:41 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:22:42.494 10:34:41 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78191 00:22:42.494 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78191 ']' 00:22:42.494 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.494 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.494 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.494 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.494 10:34:41 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:42.753 [2024-12-07 10:34:41.885753] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:22:42.753 [2024-12-07 10:34:41.885889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78191 ] 00:22:42.753 [2024-12-07 10:34:42.067148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:43.013 [2024-12-07 10:34:42.181112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.013 [2024-12-07 10:34:42.181240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.013 [2024-12-07 10:34:42.181274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.953 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.953 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:43.953 10:34:43 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:43.953 10:34:43 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:22:43.953 10:34:43 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:43.953 10:34:43 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:22:43.953 10:34:43 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:22:43.953 10:34:43 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:44.213 10:34:43 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:44.213 10:34:43 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:22:44.213 10:34:43 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:44.213 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:44.213 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:44.213 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:44.213 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:44.213 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:44.213 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:44.213 { 00:22:44.213 "name": "nvme0n1", 00:22:44.213 "aliases": [ 
00:22:44.213 "230bf753-537d-4965-b0ba-1a0636989ef8" 00:22:44.213 ], 00:22:44.213 "product_name": "NVMe disk", 00:22:44.213 "block_size": 4096, 00:22:44.213 "num_blocks": 1310720, 00:22:44.213 "uuid": "230bf753-537d-4965-b0ba-1a0636989ef8", 00:22:44.213 "numa_id": -1, 00:22:44.214 "assigned_rate_limits": { 00:22:44.214 "rw_ios_per_sec": 0, 00:22:44.214 "rw_mbytes_per_sec": 0, 00:22:44.214 "r_mbytes_per_sec": 0, 00:22:44.214 "w_mbytes_per_sec": 0 00:22:44.214 }, 00:22:44.214 "claimed": true, 00:22:44.214 "claim_type": "read_many_write_one", 00:22:44.214 "zoned": false, 00:22:44.214 "supported_io_types": { 00:22:44.214 "read": true, 00:22:44.214 "write": true, 00:22:44.214 "unmap": true, 00:22:44.214 "flush": true, 00:22:44.214 "reset": true, 00:22:44.214 "nvme_admin": true, 00:22:44.214 "nvme_io": true, 00:22:44.214 "nvme_io_md": false, 00:22:44.214 "write_zeroes": true, 00:22:44.214 "zcopy": false, 00:22:44.214 "get_zone_info": false, 00:22:44.214 "zone_management": false, 00:22:44.214 "zone_append": false, 00:22:44.214 "compare": true, 00:22:44.214 "compare_and_write": false, 00:22:44.214 "abort": true, 00:22:44.214 "seek_hole": false, 00:22:44.214 "seek_data": false, 00:22:44.214 "copy": true, 00:22:44.214 "nvme_iov_md": false 00:22:44.214 }, 00:22:44.214 "driver_specific": { 00:22:44.214 "nvme": [ 00:22:44.214 { 00:22:44.214 "pci_address": "0000:00:11.0", 00:22:44.214 "trid": { 00:22:44.214 "trtype": "PCIe", 00:22:44.214 "traddr": "0000:00:11.0" 00:22:44.214 }, 00:22:44.214 "ctrlr_data": { 00:22:44.214 "cntlid": 0, 00:22:44.214 "vendor_id": "0x1b36", 00:22:44.214 "model_number": "QEMU NVMe Ctrl", 00:22:44.214 "serial_number": "12341", 00:22:44.214 "firmware_revision": "8.0.0", 00:22:44.214 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:44.214 "oacs": { 00:22:44.214 "security": 0, 00:22:44.214 "format": 1, 00:22:44.214 "firmware": 0, 00:22:44.214 "ns_manage": 1 00:22:44.214 }, 00:22:44.214 "multi_ctrlr": false, 00:22:44.214 "ana_reporting": false 00:22:44.214 }, 00:22:44.214 "vs": { 00:22:44.214 "nvme_version": "1.4" 00:22:44.214 }, 00:22:44.214 "ns_data": { 00:22:44.214 "id": 1, 00:22:44.214 "can_share": false 00:22:44.214 } 00:22:44.214 } 00:22:44.214 ], 00:22:44.214 "mp_policy": "active_passive" 00:22:44.214 } 00:22:44.214 } 00:22:44.214 ]' 00:22:44.214 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:44.475 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:44.475 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:44.475 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:44.475 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:44.475 10:34:43 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:22:44.475 10:34:43 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:22:44.475 10:34:43 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:44.475 10:34:43 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:22:44.475 10:34:43 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:44.475 10:34:43 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:44.475 10:34:43 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=6c86e8e3-c285-4afc-bc9c-1044900333da 00:22:44.475 10:34:43 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:22:44.475 10:34:43 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 6c86e8e3-c285-4afc-bc9c-1044900333da 00:22:44.734 10:34:44 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:44.993 10:34:44 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=a7765099-84bd-474a-b32b-dcf03486312e 00:22:44.993 10:34:44 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a7765099-84bd-474a-b32b-dcf03486312e 00:22:45.253 10:34:44 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 00:22:45.253 10:34:44 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 00:22:45.253 10:34:44 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:22:45.253 10:34:44 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:45.253 10:34:44 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 00:22:45.253 10:34:44 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:22:45.253 10:34:44 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 00:22:45.253 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 00:22:45.253 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:45.253 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:45.253 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:45.253 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 00:22:45.513 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:45.513 { 00:22:45.513 "name": "1af2335e-1fbd-4495-a9f1-68aef7e1c6d4", 00:22:45.513 "aliases": [ 00:22:45.513 "lvs/nvme0n1p0" 00:22:45.513 ], 00:22:45.513 "product_name": "Logical Volume", 00:22:45.513 "block_size": 4096, 00:22:45.513 "num_blocks": 26476544, 00:22:45.513 "uuid": "1af2335e-1fbd-4495-a9f1-68aef7e1c6d4", 00:22:45.513 "assigned_rate_limits": { 00:22:45.513 "rw_ios_per_sec": 0, 00:22:45.513 "rw_mbytes_per_sec": 0, 00:22:45.513 "r_mbytes_per_sec": 0, 00:22:45.513 "w_mbytes_per_sec": 0 00:22:45.513 }, 00:22:45.513 "claimed": false, 00:22:45.513 "zoned": false, 00:22:45.513 "supported_io_types": { 00:22:45.513 "read": true, 00:22:45.513 "write": true, 00:22:45.513 "unmap": true, 00:22:45.513 "flush": false, 00:22:45.513 "reset": true, 00:22:45.513 "nvme_admin": false, 00:22:45.513 "nvme_io": false, 00:22:45.513 "nvme_io_md": false, 00:22:45.513 "write_zeroes": true, 00:22:45.513 "zcopy": false, 00:22:45.513 "get_zone_info": false, 00:22:45.513 "zone_management": false, 00:22:45.513 "zone_append": false, 00:22:45.513 "compare": false, 00:22:45.513 "compare_and_write": false, 00:22:45.513 "abort": false, 00:22:45.513 "seek_hole": true, 00:22:45.513 "seek_data": true, 00:22:45.513 "copy": false, 00:22:45.513 "nvme_iov_md": false 00:22:45.513 }, 00:22:45.513 "driver_specific": { 00:22:45.513 "lvol": { 00:22:45.513 "lvol_store_uuid": "a7765099-84bd-474a-b32b-dcf03486312e", 00:22:45.513 "base_bdev": "nvme0n1", 00:22:45.513 "thin_provision": true, 00:22:45.513 "num_allocated_clusters": 0, 00:22:45.513 "snapshot": false, 00:22:45.513 "clone": false, 00:22:45.513 "esnap_clone": false 00:22:45.513 } 00:22:45.513 } 00:22:45.513 } 00:22:45.513 ]' 00:22:45.513 10:34:44 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:45.513 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:45.513 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:45.513 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:45.513 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:45.513 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:45.513 10:34:44 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:22:45.513 10:34:44 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:22:45.513 10:34:44 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:45.773 10:34:44 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:45.773 10:34:44 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:45.773 10:34:44 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 00:22:45.773 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 00:22:45.773 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:45.773 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:45.773 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:45.773 10:34:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 00:22:46.031 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:46.031 { 00:22:46.031 "name": "1af2335e-1fbd-4495-a9f1-68aef7e1c6d4", 00:22:46.031 "aliases": [ 00:22:46.031 "lvs/nvme0n1p0" 00:22:46.031 ], 00:22:46.031 "product_name": "Logical Volume", 00:22:46.031 "block_size": 4096, 00:22:46.031 "num_blocks": 26476544, 00:22:46.031 "uuid": "1af2335e-1fbd-4495-a9f1-68aef7e1c6d4", 00:22:46.031 "assigned_rate_limits": { 00:22:46.031 "rw_ios_per_sec": 0, 00:22:46.031 "rw_mbytes_per_sec": 0, 00:22:46.031 "r_mbytes_per_sec": 0, 00:22:46.031 "w_mbytes_per_sec": 0 00:22:46.031 }, 00:22:46.031 "claimed": false, 00:22:46.031 "zoned": false, 00:22:46.031 "supported_io_types": { 00:22:46.031 "read": true, 00:22:46.031 "write": true, 00:22:46.031 "unmap": true, 00:22:46.031 "flush": false, 00:22:46.031 "reset": true, 00:22:46.031 "nvme_admin": false, 00:22:46.031 "nvme_io": false, 00:22:46.031 "nvme_io_md": false, 00:22:46.031 "write_zeroes": true, 00:22:46.031 "zcopy": false, 00:22:46.031 "get_zone_info": false, 00:22:46.031 "zone_management": false, 00:22:46.031 "zone_append": false, 00:22:46.031 "compare": false, 00:22:46.031 "compare_and_write": false, 00:22:46.031 "abort": false, 00:22:46.031 "seek_hole": true, 00:22:46.031 "seek_data": true, 00:22:46.031 "copy": false, 00:22:46.031 "nvme_iov_md": false 00:22:46.031 }, 00:22:46.031 "driver_specific": { 00:22:46.031 "lvol": { 00:22:46.031 "lvol_store_uuid": "a7765099-84bd-474a-b32b-dcf03486312e", 00:22:46.031 "base_bdev": "nvme0n1", 00:22:46.031 "thin_provision": true, 00:22:46.031 "num_allocated_clusters": 0, 00:22:46.031 "snapshot": false, 00:22:46.031 "clone": false, 00:22:46.031 "esnap_clone": false 00:22:46.031 } 00:22:46.031 } 00:22:46.031 } 00:22:46.031 ]' 00:22:46.031 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:46.031 10:34:45 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:22:46.031 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:46.032 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:46.032 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:46.032 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:46.032 10:34:45 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:22:46.032 10:34:45 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:46.289 10:34:45 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:22:46.289 10:34:45 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:22:46.289 10:34:45 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 00:22:46.289 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 00:22:46.289 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:46.289 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:46.289 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:46.289 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 00:22:46.548 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:46.548 { 00:22:46.548 "name": "1af2335e-1fbd-4495-a9f1-68aef7e1c6d4", 00:22:46.548 "aliases": [ 00:22:46.548 "lvs/nvme0n1p0" 00:22:46.548 ], 00:22:46.548 "product_name": "Logical Volume", 00:22:46.548 "block_size": 4096, 00:22:46.548 "num_blocks": 26476544, 00:22:46.548 "uuid": "1af2335e-1fbd-4495-a9f1-68aef7e1c6d4", 00:22:46.548 "assigned_rate_limits": { 00:22:46.548 "rw_ios_per_sec": 0, 00:22:46.548 "rw_mbytes_per_sec": 0, 00:22:46.548 "r_mbytes_per_sec": 0, 00:22:46.548 "w_mbytes_per_sec": 0 00:22:46.548 }, 00:22:46.548 "claimed": false, 00:22:46.548 "zoned": false, 00:22:46.548 "supported_io_types": { 00:22:46.548 "read": true, 00:22:46.548 "write": true, 00:22:46.548 "unmap": true, 00:22:46.548 "flush": false, 00:22:46.548 "reset": true, 00:22:46.548 "nvme_admin": false, 00:22:46.548 "nvme_io": false, 00:22:46.548 "nvme_io_md": false, 00:22:46.548 "write_zeroes": true, 00:22:46.548 "zcopy": false, 00:22:46.548 "get_zone_info": false, 00:22:46.548 "zone_management": false, 00:22:46.548 "zone_append": false, 00:22:46.548 "compare": false, 00:22:46.548 "compare_and_write": false, 00:22:46.548 "abort": false, 00:22:46.548 "seek_hole": true, 00:22:46.548 "seek_data": true, 00:22:46.548 "copy": false, 00:22:46.548 "nvme_iov_md": false 00:22:46.548 }, 00:22:46.548 "driver_specific": { 00:22:46.548 "lvol": { 00:22:46.548 "lvol_store_uuid": "a7765099-84bd-474a-b32b-dcf03486312e", 00:22:46.548 "base_bdev": "nvme0n1", 00:22:46.548 "thin_provision": true, 00:22:46.548 "num_allocated_clusters": 0, 00:22:46.548 "snapshot": false, 00:22:46.548 "clone": false, 00:22:46.548 "esnap_clone": false 00:22:46.548 } 00:22:46.548 } 00:22:46.548 } 00:22:46.548 ]' 00:22:46.548 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:46.548 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:46.548 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:46.548 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:22:46.548 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:46.548 10:34:45 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:46.548 10:34:45 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:22:46.548 10:34:45 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:22:46.807 [2024-12-07 10:34:45.941384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.807 [2024-12-07 10:34:45.941431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:46.807 [2024-12-07 10:34:45.941468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:46.807 [2024-12-07 10:34:45.941479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.807 [2024-12-07 10:34:45.945088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.807 [2024-12-07 10:34:45.945129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:46.807 [2024-12-07 10:34:45.945144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.564 ms 00:22:46.807 [2024-12-07 10:34:45.945154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.807 [2024-12-07 10:34:45.945333] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:46.807 [2024-12-07 10:34:45.946301] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:46.807 [2024-12-07 10:34:45.946340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.807 [2024-12-07 10:34:45.946351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:46.807 [2024-12-07 10:34:45.946364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:22:46.807 [2024-12-07 10:34:45.946374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.807 [2024-12-07 10:34:45.946523] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 06a87537-5a92-450d-8735-ed5d8c4b9fb5 00:22:46.807 [2024-12-07 10:34:45.947987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.808 [2024-12-07 10:34:45.948025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:46.808 [2024-12-07 10:34:45.948037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:46.808 [2024-12-07 10:34:45.948050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.808 [2024-12-07 10:34:45.955700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.808 [2024-12-07 10:34:45.955735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:46.808 [2024-12-07 10:34:45.955750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.509 ms 00:22:46.808 [2024-12-07 10:34:45.955762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.808 [2024-12-07 10:34:45.955963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.808 [2024-12-07 10:34:45.955999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:46.808 [2024-12-07 10:34:45.956011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.083 ms 00:22:46.808 [2024-12-07 10:34:45.956028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.808 [2024-12-07 10:34:45.956090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.808 [2024-12-07 10:34:45.956121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:46.808 [2024-12-07 10:34:45.956132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:46.808 [2024-12-07 10:34:45.956148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.808 [2024-12-07 10:34:45.956201] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:46.808 [2024-12-07 10:34:45.961439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.808 [2024-12-07 10:34:45.961474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:46.808 [2024-12-07 10:34:45.961491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.250 ms 00:22:46.808 [2024-12-07 10:34:45.961501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.808 [2024-12-07 10:34:45.961599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.808 [2024-12-07 10:34:45.961627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:46.808 [2024-12-07 10:34:45.961641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:46.808 [2024-12-07 10:34:45.961651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.808 [2024-12-07 10:34:45.961708] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:46.808 [2024-12-07 10:34:45.961839] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:46.808 [2024-12-07 10:34:45.961858] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:46.808 [2024-12-07 10:34:45.961871] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:46.808 [2024-12-07 10:34:45.961887] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:46.808 [2024-12-07 10:34:45.961899] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:46.808 [2024-12-07 10:34:45.961912] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:46.808 [2024-12-07 10:34:45.961922] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:46.808 [2024-12-07 10:34:45.961935] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:46.808 [2024-12-07 10:34:45.961947] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:46.808 [2024-12-07 10:34:45.961960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.808 [2024-12-07 10:34:45.961971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:46.808 [2024-12-07 10:34:45.962001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:22:46.808 [2024-12-07 10:34:45.962011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.808 [2024-12-07 10:34:45.962143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.808 
[2024-12-07 10:34:45.962155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:46.808 [2024-12-07 10:34:45.962169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:46.808 [2024-12-07 10:34:45.962179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.808 [2024-12-07 10:34:45.962332] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:46.808 [2024-12-07 10:34:45.962346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:46.808 [2024-12-07 10:34:45.962359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:46.808 [2024-12-07 10:34:45.962370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.808 [2024-12-07 10:34:45.962383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:46.808 [2024-12-07 10:34:45.962392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:46.808 [2024-12-07 10:34:45.962404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:46.808 [2024-12-07 10:34:45.962413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:46.808 [2024-12-07 10:34:45.962425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:46.808 [2024-12-07 10:34:45.962434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:46.808 [2024-12-07 10:34:45.962446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:46.808 [2024-12-07 10:34:45.962455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:46.808 [2024-12-07 10:34:45.962468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:46.808 [2024-12-07 10:34:45.962477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:46.808 [2024-12-07 10:34:45.962489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:46.808 [2024-12-07 10:34:45.962498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.808 [2024-12-07 10:34:45.962513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:46.808 [2024-12-07 10:34:45.962522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:46.808 [2024-12-07 10:34:45.962536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.808 [2024-12-07 10:34:45.962553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:46.808 [2024-12-07 10:34:45.962565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:46.808 [2024-12-07 10:34:45.962574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:46.808 [2024-12-07 10:34:45.962586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:46.808 [2024-12-07 10:34:45.962596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:46.808 [2024-12-07 10:34:45.962608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:46.808 [2024-12-07 10:34:45.962618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:46.808 [2024-12-07 10:34:45.962630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:46.808 [2024-12-07 10:34:45.962639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:46.808 [2024-12-07 10:34:45.962651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:22:46.808 [2024-12-07 10:34:45.962661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:46.808 [2024-12-07 10:34:45.962672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:46.808 [2024-12-07 10:34:45.962682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:46.808 [2024-12-07 10:34:45.962696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:46.808 [2024-12-07 10:34:45.962706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:46.808 [2024-12-07 10:34:45.962717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:46.808 [2024-12-07 10:34:45.962726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:46.808 [2024-12-07 10:34:45.962738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:46.808 [2024-12-07 10:34:45.962748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:46.808 [2024-12-07 10:34:45.962761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:46.808 [2024-12-07 10:34:45.962770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.808 [2024-12-07 10:34:45.962781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:46.808 [2024-12-07 10:34:45.962791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:46.808 [2024-12-07 10:34:45.962803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.808 [2024-12-07 10:34:45.962812] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:46.808 [2024-12-07 10:34:45.962824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:46.808 [2024-12-07 10:34:45.962834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:46.808 [2024-12-07 10:34:45.962847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.808 [2024-12-07 10:34:45.962858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:46.808 [2024-12-07 10:34:45.962872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:46.808 [2024-12-07 10:34:45.962882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:46.808 [2024-12-07 10:34:45.962894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:46.808 [2024-12-07 10:34:45.962904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:46.808 [2024-12-07 10:34:45.962916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:46.808 [2024-12-07 10:34:45.962928] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:46.808 [2024-12-07 10:34:45.962944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:46.808 [2024-12-07 10:34:45.962959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:46.808 [2024-12-07 10:34:45.962974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:46.808 [2024-12-07 10:34:45.962995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:22:46.808 [2024-12-07 10:34:45.963009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:46.809 [2024-12-07 10:34:45.963019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:46.809 [2024-12-07 10:34:45.963032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:46.809 [2024-12-07 10:34:45.963042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:46.809 [2024-12-07 10:34:45.963055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:46.809 [2024-12-07 10:34:45.963065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:46.809 [2024-12-07 10:34:45.963082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:46.809 [2024-12-07 10:34:45.963092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:46.809 [2024-12-07 10:34:45.963105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:46.809 [2024-12-07 10:34:45.963115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:46.809 [2024-12-07 10:34:45.963128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:46.809 [2024-12-07 10:34:45.963140] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:46.809 [2024-12-07 10:34:45.963158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:46.809 [2024-12-07 10:34:45.963171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:46.809 [2024-12-07 10:34:45.963185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:46.809 [2024-12-07 10:34:45.963196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:46.809 [2024-12-07 10:34:45.963209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:46.809 [2024-12-07 10:34:45.963221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.809 [2024-12-07 10:34:45.963234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:46.809 [2024-12-07 10:34:45.963245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:22:46.809 [2024-12-07 10:34:45.963258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.809 [2024-12-07 10:34:45.963404] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:22:46.809 [2024-12-07 10:34:45.963422] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:51.008 [2024-12-07 10:34:49.729965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:49.730038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:51.008 [2024-12-07 10:34:49.730055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3772.675 ms 00:22:51.008 [2024-12-07 10:34:49.730068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:49.766603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:49.766660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:51.008 [2024-12-07 10:34:49.766676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.266 ms 00:22:51.008 [2024-12-07 10:34:49.766690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:49.766852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:49.766870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:51.008 [2024-12-07 10:34:49.766902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:51.008 [2024-12-07 10:34:49.766919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:49.824384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:49.824434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:51.008 [2024-12-07 10:34:49.824448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.501 ms 00:22:51.008 [2024-12-07 10:34:49.824461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:49.824589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:49.824604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:51.008 [2024-12-07 10:34:49.824616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:51.008 [2024-12-07 10:34:49.824627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:49.825137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:49.825159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:51.008 [2024-12-07 10:34:49.825171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:22:51.008 [2024-12-07 10:34:49.825184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:49.825314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:49.825329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:51.008 [2024-12-07 10:34:49.825356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:22:51.008 [2024-12-07 10:34:49.825373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:49.846440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:49.846484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:22:51.008 [2024-12-07 10:34:49.846498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.043 ms 00:22:51.008 [2024-12-07 10:34:49.846512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:49.858828] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:51.008 [2024-12-07 10:34:49.875613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:49.875660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:51.008 [2024-12-07 10:34:49.875677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.989 ms 00:22:51.008 [2024-12-07 10:34:49.875688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:49.982478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:49.982712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:51.008 [2024-12-07 10:34:49.982751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.833 ms 00:22:51.008 [2024-12-07 10:34:49.982763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:49.983044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:49.983061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:51.008 [2024-12-07 10:34:49.983080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:22:51.008 [2024-12-07 10:34:49.983091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:50.020173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:50.020213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:51.008 [2024-12-07 10:34:50.020231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.074 ms 00:22:51.008 [2024-12-07 10:34:50.020242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:50.056436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:50.056619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:51.008 [2024-12-07 10:34:50.056647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.130 ms 00:22:51.008 [2024-12-07 10:34:50.056658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:50.057464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:50.057491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:51.008 [2024-12-07 10:34:50.057506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:22:51.008 [2024-12-07 10:34:50.057516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:50.169236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:50.169283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:51.008 [2024-12-07 10:34:50.169304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.837 ms 00:22:51.008 [2024-12-07 10:34:50.169315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:51.008 [2024-12-07 10:34:50.208476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:50.208517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:51.008 [2024-12-07 10:34:50.208534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.068 ms 00:22:51.008 [2024-12-07 10:34:50.208545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:50.244277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:50.244316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:51.008 [2024-12-07 10:34:50.244333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.676 ms 00:22:51.008 [2024-12-07 10:34:50.244343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:50.280084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.008 [2024-12-07 10:34:50.280135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:51.008 [2024-12-07 10:34:50.280152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.668 ms 00:22:51.008 [2024-12-07 10:34:50.280162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.008 [2024-12-07 10:34:50.280275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.009 [2024-12-07 10:34:50.280291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:51.009 [2024-12-07 10:34:50.280308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:51.009 [2024-12-07 10:34:50.280318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.009 [2024-12-07 10:34:50.280426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.009 [2024-12-07 10:34:50.280438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:51.009 [2024-12-07 10:34:50.280451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:51.009 [2024-12-07 10:34:50.280461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.009 [2024-12-07 10:34:50.281528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:51.009 [2024-12-07 10:34:50.285811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4346.931 ms, result 0 00:22:51.009 [2024-12-07 10:34:50.286927] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:51.009 { 00:22:51.009 "name": "ftl0", 00:22:51.009 "uuid": "06a87537-5a92-450d-8735-ed5d8c4b9fb5" 00:22:51.009 } 00:22:51.009 10:34:50 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:22:51.009 10:34:50 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:22:51.009 10:34:50 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:51.009 10:34:50 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:22:51.009 10:34:50 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:51.009 10:34:50 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:51.009 10:34:50 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:51.268 10:34:50 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:51.527 [ 00:22:51.527 { 00:22:51.527 "name": "ftl0", 00:22:51.527 "aliases": [ 00:22:51.527 "06a87537-5a92-450d-8735-ed5d8c4b9fb5" 00:22:51.527 ], 00:22:51.527 "product_name": "FTL disk", 00:22:51.527 "block_size": 4096, 00:22:51.527 "num_blocks": 23592960, 00:22:51.527 "uuid": "06a87537-5a92-450d-8735-ed5d8c4b9fb5", 00:22:51.527 "assigned_rate_limits": { 00:22:51.527 "rw_ios_per_sec": 0, 00:22:51.527 "rw_mbytes_per_sec": 0, 00:22:51.527 "r_mbytes_per_sec": 0, 00:22:51.527 "w_mbytes_per_sec": 0 00:22:51.527 }, 00:22:51.527 "claimed": false, 00:22:51.527 "zoned": false, 00:22:51.527 "supported_io_types": { 00:22:51.527 "read": true, 00:22:51.527 "write": true, 00:22:51.527 "unmap": true, 00:22:51.527 "flush": true, 00:22:51.527 "reset": false, 00:22:51.527 "nvme_admin": false, 00:22:51.527 "nvme_io": false, 00:22:51.527 "nvme_io_md": false, 00:22:51.527 "write_zeroes": true, 00:22:51.527 "zcopy": false, 00:22:51.527 "get_zone_info": false, 00:22:51.527 "zone_management": false, 00:22:51.527 "zone_append": false, 00:22:51.527 "compare": false, 00:22:51.527 "compare_and_write": false, 00:22:51.527 "abort": false, 00:22:51.527 "seek_hole": false, 00:22:51.527 "seek_data": false, 00:22:51.527 "copy": false, 00:22:51.527 "nvme_iov_md": false 00:22:51.527 }, 00:22:51.527 "driver_specific": { 00:22:51.527 "ftl": { 00:22:51.527 "base_bdev": "1af2335e-1fbd-4495-a9f1-68aef7e1c6d4", 00:22:51.527 "cache": "nvc0n1p0" 00:22:51.527 } 00:22:51.527 } 00:22:51.527 } 00:22:51.527 ] 00:22:51.527 10:34:50 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:22:51.527 10:34:50 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:22:51.527 10:34:50 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:51.786 10:34:50 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:22:51.786 10:34:50 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:22:52.045 10:34:51 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:22:52.045 { 00:22:52.045 "name": "ftl0", 00:22:52.045 "aliases": [ 00:22:52.045 "06a87537-5a92-450d-8735-ed5d8c4b9fb5" 00:22:52.045 ], 00:22:52.045 "product_name": "FTL disk", 00:22:52.045 "block_size": 4096, 00:22:52.045 "num_blocks": 23592960, 00:22:52.045 "uuid": "06a87537-5a92-450d-8735-ed5d8c4b9fb5", 00:22:52.045 "assigned_rate_limits": { 00:22:52.045 "rw_ios_per_sec": 0, 00:22:52.045 "rw_mbytes_per_sec": 0, 00:22:52.045 "r_mbytes_per_sec": 0, 00:22:52.045 "w_mbytes_per_sec": 0 00:22:52.045 }, 00:22:52.045 "claimed": false, 00:22:52.045 "zoned": false, 00:22:52.045 "supported_io_types": { 00:22:52.045 "read": true, 00:22:52.045 "write": true, 00:22:52.045 "unmap": true, 00:22:52.045 "flush": true, 00:22:52.045 "reset": false, 00:22:52.045 "nvme_admin": false, 00:22:52.045 "nvme_io": false, 00:22:52.046 "nvme_io_md": false, 00:22:52.046 "write_zeroes": true, 00:22:52.046 "zcopy": false, 00:22:52.046 "get_zone_info": false, 00:22:52.046 "zone_management": false, 00:22:52.046 "zone_append": false, 00:22:52.046 "compare": false, 00:22:52.046 "compare_and_write": false, 00:22:52.046 "abort": false, 00:22:52.046 "seek_hole": false, 00:22:52.046 "seek_data": false, 00:22:52.046 "copy": false, 00:22:52.046 "nvme_iov_md": false 00:22:52.046 }, 00:22:52.046 "driver_specific": { 00:22:52.046 "ftl": { 00:22:52.046 "base_bdev": "1af2335e-1fbd-4495-a9f1-68aef7e1c6d4", 
00:22:52.046 "cache": "nvc0n1p0" 00:22:52.046 } 00:22:52.046 } 00:22:52.046 } 00:22:52.046 ]' 00:22:52.046 10:34:51 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:22:52.046 10:34:51 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:22:52.046 10:34:51 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:52.046 [2024-12-07 10:34:51.373012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.046 [2024-12-07 10:34:51.373063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:52.046 [2024-12-07 10:34:51.373083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:52.046 [2024-12-07 10:34:51.373100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.046 [2024-12-07 10:34:51.373167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:52.046 [2024-12-07 10:34:51.377328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.046 [2024-12-07 10:34:51.377362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:52.046 [2024-12-07 10:34:51.377380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.144 ms 00:22:52.046 [2024-12-07 10:34:51.377391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.046 [2024-12-07 10:34:51.378471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.046 [2024-12-07 10:34:51.378499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:52.046 [2024-12-07 10:34:51.378514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:22:52.046 [2024-12-07 10:34:51.378524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.046 [2024-12-07 10:34:51.381295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.046 [2024-12-07 10:34:51.381324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:52.046 [2024-12-07 10:34:51.381339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.702 ms 00:22:52.046 [2024-12-07 10:34:51.381349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.046 [2024-12-07 10:34:51.386959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.046 [2024-12-07 10:34:51.387151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:52.046 [2024-12-07 10:34:51.387179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.531 ms 00:22:52.046 [2024-12-07 10:34:51.387190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.307 [2024-12-07 10:34:51.424343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.307 [2024-12-07 10:34:51.424381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:52.307 [2024-12-07 10:34:51.424401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.070 ms 00:22:52.307 [2024-12-07 10:34:51.424411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.307 [2024-12-07 10:34:51.446477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.307 [2024-12-07 10:34:51.446657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:52.307 [2024-12-07 10:34:51.446685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 21.987 ms 00:22:52.307 [2024-12-07 10:34:51.446700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.307 [2024-12-07 10:34:51.447076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.307 [2024-12-07 10:34:51.447093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:52.307 [2024-12-07 10:34:51.447109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:22:52.307 [2024-12-07 10:34:51.447122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.307 [2024-12-07 10:34:51.482600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.307 [2024-12-07 10:34:51.482637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:52.307 [2024-12-07 10:34:51.482654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.484 ms 00:22:52.307 [2024-12-07 10:34:51.482664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.307 [2024-12-07 10:34:51.517920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.307 [2024-12-07 10:34:51.518096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:52.307 [2024-12-07 10:34:51.518137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.207 ms 00:22:52.307 [2024-12-07 10:34:51.518148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.307 [2024-12-07 10:34:51.552809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.307 [2024-12-07 10:34:51.552970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:52.307 [2024-12-07 10:34:51.553005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.545 ms 00:22:52.307 [2024-12-07 10:34:51.553016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.307 [2024-12-07 10:34:51.587689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.307 [2024-12-07 10:34:51.587725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:52.307 [2024-12-07 10:34:51.587741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.465 ms 00:22:52.307 [2024-12-07 10:34:51.587751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.307 [2024-12-07 10:34:51.587869] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:52.307 [2024-12-07 10:34:51.587888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.587904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.587915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.587928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.587940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.587956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.587967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588008] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 
[2024-12-07 10:34:51.588330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:22:52.307 [2024-12-07 10:34:51.588655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:52.307 [2024-12-07 10:34:51.588901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.588911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.588924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.588935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.588949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.588960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.588973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.588984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:52.308 [2024-12-07 10:34:51.589198] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:52.308 [2024-12-07 10:34:51.589212] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 06a87537-5a92-450d-8735-ed5d8c4b9fb5 00:22:52.308 [2024-12-07 10:34:51.589224] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:52.308 [2024-12-07 10:34:51.589237] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:52.308 [2024-12-07 10:34:51.589246] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:52.308 [2024-12-07 10:34:51.589263] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:52.308 [2024-12-07 10:34:51.589272] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:52.308 [2024-12-07 10:34:51.589284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:22:52.308 [2024-12-07 10:34:51.589294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:52.308 [2024-12-07 10:34:51.589306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:52.308 [2024-12-07 10:34:51.589315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:52.308 [2024-12-07 10:34:51.589328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.308 [2024-12-07 10:34:51.589339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:52.308 [2024-12-07 10:34:51.589353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:22:52.308 [2024-12-07 10:34:51.589362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.308 [2024-12-07 10:34:51.609047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.308 [2024-12-07 10:34:51.609082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:52.308 [2024-12-07 10:34:51.609100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.664 ms 00:22:52.308 [2024-12-07 10:34:51.609110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.308 [2024-12-07 10:34:51.609663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.308 [2024-12-07 10:34:51.609681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:52.308 [2024-12-07 10:34:51.609694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:22:52.308 [2024-12-07 10:34:51.609704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.567 [2024-12-07 10:34:51.677411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.567 [2024-12-07 10:34:51.677581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:52.567 [2024-12-07 10:34:51.677606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.567 [2024-12-07 10:34:51.677617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.567 [2024-12-07 10:34:51.677769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.567 [2024-12-07 10:34:51.677783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:52.567 [2024-12-07 10:34:51.677796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.567 [2024-12-07 10:34:51.677807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.567 [2024-12-07 10:34:51.677911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.567 [2024-12-07 10:34:51.677925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:52.567 [2024-12-07 10:34:51.677944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.567 [2024-12-07 10:34:51.677954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.567 [2024-12-07 10:34:51.678036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.567 [2024-12-07 10:34:51.678049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:52.567 [2024-12-07 10:34:51.678062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.567 [2024-12-07 10:34:51.678072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.567 [2024-12-07 10:34:51.804339] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.567 [2024-12-07 10:34:51.804398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:52.567 [2024-12-07 10:34:51.804416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.567 [2024-12-07 10:34:51.804427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.567 [2024-12-07 10:34:51.901913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.568 [2024-12-07 10:34:51.901966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:52.568 [2024-12-07 10:34:51.902002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.568 [2024-12-07 10:34:51.902013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.568 [2024-12-07 10:34:51.902190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.568 [2024-12-07 10:34:51.902204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:52.568 [2024-12-07 10:34:51.902221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.568 [2024-12-07 10:34:51.902235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.568 [2024-12-07 10:34:51.902336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.568 [2024-12-07 10:34:51.902349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:52.568 [2024-12-07 10:34:51.902362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.568 [2024-12-07 10:34:51.902372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.568 [2024-12-07 10:34:51.902536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.568 [2024-12-07 10:34:51.902558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:52.568 [2024-12-07 10:34:51.902571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.568 [2024-12-07 10:34:51.902584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.568 [2024-12-07 10:34:51.902669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.568 [2024-12-07 10:34:51.902682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:52.568 [2024-12-07 10:34:51.902695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.568 [2024-12-07 10:34:51.902706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.568 [2024-12-07 10:34:51.902786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.568 [2024-12-07 10:34:51.902799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:52.568 [2024-12-07 10:34:51.902814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.568 [2024-12-07 10:34:51.902824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.568 [2024-12-07 10:34:51.902907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.568 [2024-12-07 10:34:51.902920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:52.568 [2024-12-07 10:34:51.902933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.568 [2024-12-07 10:34:51.902942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:52.568 [2024-12-07 10:34:51.903200] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 531.052 ms, result 0 00:22:52.568 true 00:22:52.827 10:34:51 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78191 00:22:52.827 10:34:51 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78191 ']' 00:22:52.827 10:34:51 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78191 00:22:52.827 10:34:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:52.827 10:34:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.827 10:34:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78191 00:22:52.827 killing process with pid 78191 00:22:52.827 10:34:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:52.827 10:34:51 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:52.827 10:34:51 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78191' 00:22:52.827 10:34:51 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78191 00:22:52.827 10:34:51 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78191 00:22:58.107 10:34:56 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:22:58.676 65536+0 records in 00:22:58.676 65536+0 records out 00:22:58.676 268435456 bytes (268 MB, 256 MiB) copied, 0.955321 s, 281 MB/s 00:22:58.676 10:34:57 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:58.676 [2024-12-07 10:34:57.878030] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
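For reference, the bdev lifecycle exercised above by trim.sh can be reproduced by hand against a running SPDK target using the same rpc.py calls that appear in the trace. This is a minimal sketch, assuming the base bdev and the nvc0n1p0 cache partition already exist and using the UUID reported by the earlier bdev_ftl_create; paths and values are taken from the log above, not a definitive recipe:
    # create the FTL bdev on top of the base bdev, with nvc0n1p0 as the write buffer cache
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1af2335e-1fbd-4495-a9f1-68aef7e1c6d4 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
    # wait for bdev examination, then query the new bdev (waitforbdev uses a 2000 ms timeout)
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
    # pull num_blocks (23592960 above) out of the JSON, as trim.sh@60 does
    scripts/rpc.py bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks'
    # tear the device down again; this triggers the 'FTL shutdown' management process seen above
    scripts/rpc.py bdev_ftl_unload -b ftl0
The write phase that follows uses the 256 MiB pattern produced by dd if=/dev/urandom bs=4K count=65536, which spdk_dd then replays onto ftl0 via --ob=ftl0 using the bdev configuration captured earlier (save_subsystem_config -n bdev wrapped in '{"subsystems": [...]}' by trim.sh@54-56 and passed as --json=ftl.json).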
00:22:58.676 [2024-12-07 10:34:57.878142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78396 ] 00:22:58.934 [2024-12-07 10:34:58.056177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.934 [2024-12-07 10:34:58.161686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.193 [2024-12-07 10:34:58.519088] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:59.193 [2024-12-07 10:34:58.519161] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:59.454 [2024-12-07 10:34:58.681485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.454 [2024-12-07 10:34:58.681733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:59.454 [2024-12-07 10:34:58.681757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:59.454 [2024-12-07 10:34:58.681768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.454 [2024-12-07 10:34:58.684975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.454 [2024-12-07 10:34:58.685026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:59.454 [2024-12-07 10:34:58.685039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.182 ms 00:22:59.454 [2024-12-07 10:34:58.685049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.454 [2024-12-07 10:34:58.685155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:59.454 [2024-12-07 10:34:58.686151] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:59.454 [2024-12-07 10:34:58.686188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.454 [2024-12-07 10:34:58.686199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:59.454 [2024-12-07 10:34:58.686210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:22:59.454 [2024-12-07 10:34:58.686220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.454 [2024-12-07 10:34:58.687734] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:59.454 [2024-12-07 10:34:58.706858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.454 [2024-12-07 10:34:58.706898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:59.454 [2024-12-07 10:34:58.706913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.155 ms 00:22:59.454 [2024-12-07 10:34:58.706923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.454 [2024-12-07 10:34:58.707052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.454 [2024-12-07 10:34:58.707081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:59.454 [2024-12-07 10:34:58.707092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:59.454 [2024-12-07 10:34:58.707102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.454 [2024-12-07 10:34:58.714038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:59.454 [2024-12-07 10:34:58.714203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:59.454 [2024-12-07 10:34:58.714224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.906 ms 00:22:59.454 [2024-12-07 10:34:58.714234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.454 [2024-12-07 10:34:58.714341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.454 [2024-12-07 10:34:58.714355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:59.454 [2024-12-07 10:34:58.714367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:59.454 [2024-12-07 10:34:58.714376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.454 [2024-12-07 10:34:58.714408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.454 [2024-12-07 10:34:58.714420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:59.454 [2024-12-07 10:34:58.714431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:59.454 [2024-12-07 10:34:58.714441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.454 [2024-12-07 10:34:58.714464] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:59.454 [2024-12-07 10:34:58.719171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.454 [2024-12-07 10:34:58.719204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:59.454 [2024-12-07 10:34:58.719215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.720 ms 00:22:59.454 [2024-12-07 10:34:58.719225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.454 [2024-12-07 10:34:58.719294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.454 [2024-12-07 10:34:58.719306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:59.454 [2024-12-07 10:34:58.719316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:59.454 [2024-12-07 10:34:58.719327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.454 [2024-12-07 10:34:58.719350] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:59.454 [2024-12-07 10:34:58.719373] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:59.454 [2024-12-07 10:34:58.719406] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:59.454 [2024-12-07 10:34:58.719422] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:59.454 [2024-12-07 10:34:58.719513] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:59.454 [2024-12-07 10:34:58.719527] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:59.454 [2024-12-07 10:34:58.719540] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:59.454 [2024-12-07 10:34:58.719555] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:59.454 [2024-12-07 10:34:58.719567] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:59.454 [2024-12-07 10:34:58.719578] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:59.454 [2024-12-07 10:34:58.719587] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:59.454 [2024-12-07 10:34:58.719597] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:59.454 [2024-12-07 10:34:58.719607] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:59.454 [2024-12-07 10:34:58.719618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.454 [2024-12-07 10:34:58.719628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:59.454 [2024-12-07 10:34:58.719639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:22:59.454 [2024-12-07 10:34:58.719648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.454 [2024-12-07 10:34:58.719719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.454 [2024-12-07 10:34:58.719733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:59.454 [2024-12-07 10:34:58.719744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:59.454 [2024-12-07 10:34:58.719754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.454 [2024-12-07 10:34:58.719839] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:59.454 [2024-12-07 10:34:58.719852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:59.454 [2024-12-07 10:34:58.719863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:59.454 [2024-12-07 10:34:58.719873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.454 [2024-12-07 10:34:58.719883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:59.454 [2024-12-07 10:34:58.719892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:59.454 [2024-12-07 10:34:58.719901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:59.455 [2024-12-07 10:34:58.719912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:59.455 [2024-12-07 10:34:58.719921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:59.455 [2024-12-07 10:34:58.719931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:59.455 [2024-12-07 10:34:58.719941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:59.455 [2024-12-07 10:34:58.719960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:59.455 [2024-12-07 10:34:58.719969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:59.455 [2024-12-07 10:34:58.719995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:59.455 [2024-12-07 10:34:58.720005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:59.455 [2024-12-07 10:34:58.720013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.455 [2024-12-07 10:34:58.720022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:59.455 [2024-12-07 10:34:58.720031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:59.455 [2024-12-07 10:34:58.720040] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.455 [2024-12-07 10:34:58.720049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:59.455 [2024-12-07 10:34:58.720059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:59.455 [2024-12-07 10:34:58.720068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.455 [2024-12-07 10:34:58.720076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:59.455 [2024-12-07 10:34:58.720085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:59.455 [2024-12-07 10:34:58.720093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.455 [2024-12-07 10:34:58.720101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:59.455 [2024-12-07 10:34:58.720109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:59.455 [2024-12-07 10:34:58.720118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.455 [2024-12-07 10:34:58.720126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:59.455 [2024-12-07 10:34:58.720134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:59.455 [2024-12-07 10:34:58.720142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.455 [2024-12-07 10:34:58.720150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:59.455 [2024-12-07 10:34:58.720159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:59.455 [2024-12-07 10:34:58.720168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:59.455 [2024-12-07 10:34:58.720177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:59.455 [2024-12-07 10:34:58.720185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:59.455 [2024-12-07 10:34:58.720193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:59.455 [2024-12-07 10:34:58.720209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:59.455 [2024-12-07 10:34:58.720218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:59.455 [2024-12-07 10:34:58.720226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.455 [2024-12-07 10:34:58.720234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:59.455 [2024-12-07 10:34:58.720243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:59.455 [2024-12-07 10:34:58.720252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.455 [2024-12-07 10:34:58.720260] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:59.455 [2024-12-07 10:34:58.720270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:59.455 [2024-12-07 10:34:58.720282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:59.455 [2024-12-07 10:34:58.720291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.455 [2024-12-07 10:34:58.720300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:59.455 [2024-12-07 10:34:58.720309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:59.455 [2024-12-07 10:34:58.720318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:59.455 
[2024-12-07 10:34:58.720326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:59.455 [2024-12-07 10:34:58.720334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:59.455 [2024-12-07 10:34:58.720343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:59.455 [2024-12-07 10:34:58.720354] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:59.455 [2024-12-07 10:34:58.720372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:59.455 [2024-12-07 10:34:58.720383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:59.455 [2024-12-07 10:34:58.720393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:59.455 [2024-12-07 10:34:58.720402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:59.455 [2024-12-07 10:34:58.720411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:59.455 [2024-12-07 10:34:58.720421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:59.455 [2024-12-07 10:34:58.720430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:59.455 [2024-12-07 10:34:58.720440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:59.455 [2024-12-07 10:34:58.720450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:59.455 [2024-12-07 10:34:58.720475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:59.455 [2024-12-07 10:34:58.720485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:59.455 [2024-12-07 10:34:58.720495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:59.455 [2024-12-07 10:34:58.720504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:59.455 [2024-12-07 10:34:58.720514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:59.455 [2024-12-07 10:34:58.720523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:59.455 [2024-12-07 10:34:58.720533] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:59.455 [2024-12-07 10:34:58.720545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:59.455 [2024-12-07 10:34:58.720556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:59.455 [2024-12-07 10:34:58.720565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:59.455 [2024-12-07 10:34:58.720575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:59.455 [2024-12-07 10:34:58.720587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:59.455 [2024-12-07 10:34:58.720598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.455 [2024-12-07 10:34:58.720612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:59.455 [2024-12-07 10:34:58.720622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:22:59.455 [2024-12-07 10:34:58.720631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.455 [2024-12-07 10:34:58.756595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.455 [2024-12-07 10:34:58.756632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:59.455 [2024-12-07 10:34:58.756645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.962 ms 00:22:59.455 [2024-12-07 10:34:58.756655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.455 [2024-12-07 10:34:58.756770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.455 [2024-12-07 10:34:58.756783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:59.455 [2024-12-07 10:34:58.756794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:59.455 [2024-12-07 10:34:58.756803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:58.830481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:58.830519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:59.716 [2024-12-07 10:34:58.830537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.775 ms 00:22:59.716 [2024-12-07 10:34:58.830555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:58.830655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:58.830668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:59.716 [2024-12-07 10:34:58.830681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:59.716 [2024-12-07 10:34:58.830691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:58.831154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:58.831170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:59.716 [2024-12-07 10:34:58.831186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:22:59.716 [2024-12-07 10:34:58.831196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:58.831309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:58.831323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:59.716 [2024-12-07 10:34:58.831334] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:22:59.716 [2024-12-07 10:34:58.831344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:58.850264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:58.850297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:59.716 [2024-12-07 10:34:58.850310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.913 ms 00:22:59.716 [2024-12-07 10:34:58.850321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:58.868736] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:59.716 [2024-12-07 10:34:58.868950] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:59.716 [2024-12-07 10:34:58.868972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:58.868999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:59.716 [2024-12-07 10:34:58.869011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.575 ms 00:22:59.716 [2024-12-07 10:34:58.869023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:58.897235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:58.897293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:59.716 [2024-12-07 10:34:58.897308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.172 ms 00:22:59.716 [2024-12-07 10:34:58.897318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:58.914589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:58.914741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:59.716 [2024-12-07 10:34:58.914760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.219 ms 00:22:59.716 [2024-12-07 10:34:58.914771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:58.932118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:58.932154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:59.716 [2024-12-07 10:34:58.932167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.279 ms 00:22:59.716 [2024-12-07 10:34:58.932176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:58.932881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:58.932906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:59.716 [2024-12-07 10:34:58.932918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:22:59.716 [2024-12-07 10:34:58.932927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:59.015283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:59.015349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:59.716 [2024-12-07 10:34:59.015364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.463 ms 00:22:59.716 [2024-12-07 10:34:59.015375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:59.025535] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:59.716 [2024-12-07 10:34:59.041019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:59.041061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:59.716 [2024-12-07 10:34:59.041077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.599 ms 00:22:59.716 [2024-12-07 10:34:59.041088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:59.041203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:59.041217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:59.716 [2024-12-07 10:34:59.041229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:59.716 [2024-12-07 10:34:59.041238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:59.041291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:59.041303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:59.716 [2024-12-07 10:34:59.041313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:59.716 [2024-12-07 10:34:59.041324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:59.041359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:59.041376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:59.716 [2024-12-07 10:34:59.041386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:59.716 [2024-12-07 10:34:59.041396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.716 [2024-12-07 10:34:59.041432] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:59.716 [2024-12-07 10:34:59.041443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.716 [2024-12-07 10:34:59.041453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:59.716 [2024-12-07 10:34:59.041464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:59.716 [2024-12-07 10:34:59.041473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.975 [2024-12-07 10:34:59.076863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.975 [2024-12-07 10:34:59.076904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:59.975 [2024-12-07 10:34:59.076918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.425 ms 00:22:59.976 [2024-12-07 10:34:59.076929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.976 [2024-12-07 10:34:59.077048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.976 [2024-12-07 10:34:59.077063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:59.976 [2024-12-07 10:34:59.077074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:59.976 [2024-12-07 10:34:59.077084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:59.976 [2024-12-07 10:34:59.078026] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:59.976 [2024-12-07 10:34:59.082124] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.844 ms, result 0 00:22:59.976 [2024-12-07 10:34:59.083193] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:59.976 [2024-12-07 10:34:59.101304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:00.951  [2024-12-07T10:35:01.321Z] Copying: 21/256 [MB] (21 MBps) [2024-12-07T10:35:02.257Z] Copying: 43/256 [MB] (22 MBps) [2024-12-07T10:35:03.196Z] Copying: 66/256 [MB] (22 MBps) [2024-12-07T10:35:04.134Z] Copying: 88/256 [MB] (22 MBps) [2024-12-07T10:35:05.513Z] Copying: 112/256 [MB] (23 MBps) [2024-12-07T10:35:06.448Z] Copying: 135/256 [MB] (23 MBps) [2024-12-07T10:35:07.386Z] Copying: 157/256 [MB] (21 MBps) [2024-12-07T10:35:08.324Z] Copying: 179/256 [MB] (21 MBps) [2024-12-07T10:35:09.261Z] Copying: 202/256 [MB] (23 MBps) [2024-12-07T10:35:10.198Z] Copying: 225/256 [MB] (22 MBps) [2024-12-07T10:35:10.458Z] Copying: 249/256 [MB] (24 MBps) [2024-12-07T10:35:10.458Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-07 10:35:10.352004] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:11.105 [2024-12-07 10:35:10.366067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.105 [2024-12-07 10:35:10.366222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:11.105 [2024-12-07 10:35:10.366247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:11.105 [2024-12-07 10:35:10.366265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.105 [2024-12-07 10:35:10.366298] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:11.105 [2024-12-07 10:35:10.370100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.105 [2024-12-07 10:35:10.370132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:11.105 [2024-12-07 10:35:10.370144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.792 ms 00:23:11.105 [2024-12-07 10:35:10.370153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.105 [2024-12-07 10:35:10.372105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.105 [2024-12-07 10:35:10.372142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:11.105 [2024-12-07 10:35:10.372155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.930 ms 00:23:11.105 [2024-12-07 10:35:10.372165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.105 [2024-12-07 10:35:10.378685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.105 [2024-12-07 10:35:10.378728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:11.105 [2024-12-07 10:35:10.378740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.511 ms 00:23:11.105 [2024-12-07 10:35:10.378750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.105 [2024-12-07 10:35:10.384125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.105 
[2024-12-07 10:35:10.384265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:11.105 [2024-12-07 10:35:10.384300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.331 ms 00:23:11.105 [2024-12-07 10:35:10.384311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.105 [2024-12-07 10:35:10.418372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.105 [2024-12-07 10:35:10.418526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:11.105 [2024-12-07 10:35:10.418554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.067 ms 00:23:11.105 [2024-12-07 10:35:10.418564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.105 [2024-12-07 10:35:10.438993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.105 [2024-12-07 10:35:10.439036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:11.105 [2024-12-07 10:35:10.439052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.355 ms 00:23:11.105 [2024-12-07 10:35:10.439062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.105 [2024-12-07 10:35:10.439190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.105 [2024-12-07 10:35:10.439204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:11.105 [2024-12-07 10:35:10.439215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:11.105 [2024-12-07 10:35:10.439234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.365 [2024-12-07 10:35:10.474394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.365 [2024-12-07 10:35:10.474429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:11.365 [2024-12-07 10:35:10.474442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.199 ms 00:23:11.365 [2024-12-07 10:35:10.474452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.365 [2024-12-07 10:35:10.509010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.365 [2024-12-07 10:35:10.509044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:11.366 [2024-12-07 10:35:10.509056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.560 ms 00:23:11.366 [2024-12-07 10:35:10.509065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.366 [2024-12-07 10:35:10.542437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.366 [2024-12-07 10:35:10.542472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:11.366 [2024-12-07 10:35:10.542485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.373 ms 00:23:11.366 [2024-12-07 10:35:10.542494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.366 [2024-12-07 10:35:10.576007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.366 [2024-12-07 10:35:10.576041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:11.366 [2024-12-07 10:35:10.576054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.481 ms 00:23:11.366 [2024-12-07 10:35:10.576062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.366 [2024-12-07 10:35:10.576114] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:11.366 [2024-12-07 10:35:10.576130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576371] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 
10:35:10.576618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:23:11.366 [2024-12-07 10:35:10.576860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:11.366 [2024-12-07 10:35:10.576879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.576888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.576898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.576907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.576917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.576926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.576935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.576945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.576955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.576965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.576988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.576998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.577008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.577017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.577027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.577037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.577047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.577056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.577078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.577088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.577099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.577109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.577119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.577129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:11.367 [2024-12-07 10:35:10.577145] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:11.367 [2024-12-07 10:35:10.577155] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 06a87537-5a92-450d-8735-ed5d8c4b9fb5 00:23:11.367 [2024-12-07 10:35:10.577165] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:11.367 [2024-12-07 10:35:10.577173] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:11.367 [2024-12-07 10:35:10.577183] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:11.367 [2024-12-07 10:35:10.577193] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:11.367 [2024-12-07 10:35:10.577202] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:11.367 [2024-12-07 10:35:10.577211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:11.367 [2024-12-07 10:35:10.577220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:11.367 [2024-12-07 10:35:10.577229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:11.367 [2024-12-07 10:35:10.577237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:11.367 [2024-12-07 10:35:10.577245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.367 [2024-12-07 10:35:10.577270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:11.367 [2024-12-07 10:35:10.577280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.134 ms 00:23:11.367 [2024-12-07 10:35:10.577289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.367 [2024-12-07 10:35:10.596421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.367 [2024-12-07 10:35:10.596453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:11.367 [2024-12-07 10:35:10.596465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.144 ms 00:23:11.367 [2024-12-07 10:35:10.596475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.367 [2024-12-07 10:35:10.597049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.367 [2024-12-07 10:35:10.597062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:11.367 [2024-12-07 10:35:10.597073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:23:11.367 [2024-12-07 10:35:10.597082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.367 [2024-12-07 10:35:10.648737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.367 [2024-12-07 10:35:10.648772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:11.367 [2024-12-07 10:35:10.648784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.367 [2024-12-07 10:35:10.648794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.367 [2024-12-07 10:35:10.648883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.367 [2024-12-07 10:35:10.648895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:11.367 [2024-12-07 10:35:10.648905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:11.367 [2024-12-07 10:35:10.648914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.367 [2024-12-07 10:35:10.648964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.367 [2024-12-07 10:35:10.648996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:11.367 [2024-12-07 10:35:10.649022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.367 [2024-12-07 10:35:10.649033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.367 [2024-12-07 10:35:10.649052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.367 [2024-12-07 10:35:10.649068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:11.367 [2024-12-07 10:35:10.649078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.367 [2024-12-07 10:35:10.649088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.627 [2024-12-07 10:35:10.765390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.627 [2024-12-07 10:35:10.765441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:11.627 [2024-12-07 10:35:10.765456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.627 [2024-12-07 10:35:10.765466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.627 [2024-12-07 10:35:10.861462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.627 [2024-12-07 10:35:10.861711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:11.627 [2024-12-07 10:35:10.861735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.627 [2024-12-07 10:35:10.861746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.627 [2024-12-07 10:35:10.861812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.627 [2024-12-07 10:35:10.861824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:11.627 [2024-12-07 10:35:10.861835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.627 [2024-12-07 10:35:10.861846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.627 [2024-12-07 10:35:10.861874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.627 [2024-12-07 10:35:10.861887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:11.627 [2024-12-07 10:35:10.861903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.627 [2024-12-07 10:35:10.861914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.627 [2024-12-07 10:35:10.862039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.627 [2024-12-07 10:35:10.862055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:11.627 [2024-12-07 10:35:10.862067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.627 [2024-12-07 10:35:10.862077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.627 [2024-12-07 10:35:10.862119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.627 [2024-12-07 10:35:10.862132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:11.627 
[2024-12-07 10:35:10.862143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.627 [2024-12-07 10:35:10.862157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.627 [2024-12-07 10:35:10.862197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.627 [2024-12-07 10:35:10.862209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:11.627 [2024-12-07 10:35:10.862219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.627 [2024-12-07 10:35:10.862229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.627 [2024-12-07 10:35:10.862272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.627 [2024-12-07 10:35:10.862285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:11.627 [2024-12-07 10:35:10.862299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.627 [2024-12-07 10:35:10.862325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.627 [2024-12-07 10:35:10.862466] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 497.196 ms, result 0 00:23:13.006 00:23:13.006 00:23:13.006 10:35:12 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78537 00:23:13.006 10:35:12 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:13.006 10:35:12 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78537 00:23:13.007 10:35:12 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78537 ']' 00:23:13.007 10:35:12 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.007 10:35:12 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.007 10:35:12 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.007 10:35:12 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.007 10:35:12 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:13.007 [2024-12-07 10:35:12.170365] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:23:13.007 [2024-12-07 10:35:12.170491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78537 ] 00:23:13.007 [2024-12-07 10:35:12.350948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.266 [2024-12-07 10:35:12.449401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.210 10:35:13 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.210 10:35:13 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:14.210 10:35:13 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:14.210 [2024-12-07 10:35:13.459675] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.210 [2024-12-07 10:35:13.459740] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.470 [2024-12-07 10:35:13.641985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.470 [2024-12-07 10:35:13.642031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:14.470 [2024-12-07 10:35:13.642051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:14.470 [2024-12-07 10:35:13.642061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.470 [2024-12-07 10:35:13.645659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.470 [2024-12-07 10:35:13.645696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:14.470 [2024-12-07 10:35:13.645712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms 00:23:14.470 [2024-12-07 10:35:13.645722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.470 [2024-12-07 10:35:13.645842] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:14.470 [2024-12-07 10:35:13.646795] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:14.470 [2024-12-07 10:35:13.646832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.470 [2024-12-07 10:35:13.646844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:14.470 [2024-12-07 10:35:13.646857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:23:14.470 [2024-12-07 10:35:13.646867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.470 [2024-12-07 10:35:13.648428] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:14.470 [2024-12-07 10:35:13.667152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.470 [2024-12-07 10:35:13.667200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:14.470 [2024-12-07 10:35:13.667215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.759 ms 00:23:14.470 [2024-12-07 10:35:13.667230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.470 [2024-12-07 10:35:13.667333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.470 [2024-12-07 10:35:13.667352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:14.470 [2024-12-07 10:35:13.667364] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:14.470 [2024-12-07 10:35:13.667378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.470 [2024-12-07 10:35:13.674208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.470 [2024-12-07 10:35:13.674247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:14.470 [2024-12-07 10:35:13.674259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.787 ms 00:23:14.470 [2024-12-07 10:35:13.674274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.470 [2024-12-07 10:35:13.674403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.470 [2024-12-07 10:35:13.674424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:14.470 [2024-12-07 10:35:13.674435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:14.470 [2024-12-07 10:35:13.674460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.470 [2024-12-07 10:35:13.674486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.470 [2024-12-07 10:35:13.674502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:14.470 [2024-12-07 10:35:13.674512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:14.470 [2024-12-07 10:35:13.674527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.470 [2024-12-07 10:35:13.674558] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:14.470 [2024-12-07 10:35:13.679297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.470 [2024-12-07 10:35:13.679328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:14.470 [2024-12-07 10:35:13.679344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.747 ms 00:23:14.470 [2024-12-07 10:35:13.679354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.470 [2024-12-07 10:35:13.679431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.470 [2024-12-07 10:35:13.679444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:14.470 [2024-12-07 10:35:13.679460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:14.470 [2024-12-07 10:35:13.679474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.470 [2024-12-07 10:35:13.679501] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:14.470 [2024-12-07 10:35:13.679528] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:14.470 [2024-12-07 10:35:13.679576] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:14.470 [2024-12-07 10:35:13.679595] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:14.470 [2024-12-07 10:35:13.679684] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:14.470 [2024-12-07 10:35:13.679697] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:14.470 [2024-12-07 10:35:13.679720] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:14.470 [2024-12-07 10:35:13.679733] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:14.470 [2024-12-07 10:35:13.679750] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:14.470 [2024-12-07 10:35:13.679761] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:14.470 [2024-12-07 10:35:13.679775] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:14.471 [2024-12-07 10:35:13.679785] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:14.471 [2024-12-07 10:35:13.679804] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:14.471 [2024-12-07 10:35:13.679815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.471 [2024-12-07 10:35:13.679830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:14.471 [2024-12-07 10:35:13.679840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:23:14.471 [2024-12-07 10:35:13.679856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.471 [2024-12-07 10:35:13.679929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.471 [2024-12-07 10:35:13.679947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:14.471 [2024-12-07 10:35:13.679957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:14.471 [2024-12-07 10:35:13.679971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.471 [2024-12-07 10:35:13.680072] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:14.471 [2024-12-07 10:35:13.680092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:14.471 [2024-12-07 10:35:13.680103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.471 [2024-12-07 10:35:13.680118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.471 [2024-12-07 10:35:13.680128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:14.471 [2024-12-07 10:35:13.680143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:14.471 [2024-12-07 10:35:13.680153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:14.471 [2024-12-07 10:35:13.680172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:14.471 [2024-12-07 10:35:13.680183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:14.471 [2024-12-07 10:35:13.680197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.471 [2024-12-07 10:35:13.680207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:14.471 [2024-12-07 10:35:13.680221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:14.471 [2024-12-07 10:35:13.680230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.471 [2024-12-07 10:35:13.680243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:14.471 [2024-12-07 10:35:13.680253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:14.471 [2024-12-07 10:35:13.680266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.471 
[2024-12-07 10:35:13.680275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:14.471 [2024-12-07 10:35:13.680289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:14.471 [2024-12-07 10:35:13.680309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.471 [2024-12-07 10:35:13.680323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:14.471 [2024-12-07 10:35:13.680332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:14.471 [2024-12-07 10:35:13.680345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.471 [2024-12-07 10:35:13.680354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:14.471 [2024-12-07 10:35:13.680373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:14.471 [2024-12-07 10:35:13.680383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.471 [2024-12-07 10:35:13.680396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:14.471 [2024-12-07 10:35:13.680405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:14.471 [2024-12-07 10:35:13.680418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.471 [2024-12-07 10:35:13.680427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:14.471 [2024-12-07 10:35:13.680441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:14.471 [2024-12-07 10:35:13.680450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.471 [2024-12-07 10:35:13.680464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:14.471 [2024-12-07 10:35:13.680474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:14.471 [2024-12-07 10:35:13.680487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.471 [2024-12-07 10:35:13.680496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:14.471 [2024-12-07 10:35:13.680509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:14.471 [2024-12-07 10:35:13.680519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.471 [2024-12-07 10:35:13.680531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:14.471 [2024-12-07 10:35:13.680541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:14.471 [2024-12-07 10:35:13.680559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.471 [2024-12-07 10:35:13.680568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:14.471 [2024-12-07 10:35:13.680583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:14.471 [2024-12-07 10:35:13.680593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.471 [2024-12-07 10:35:13.680607] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:14.471 [2024-12-07 10:35:13.680622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:14.471 [2024-12-07 10:35:13.680636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.471 [2024-12-07 10:35:13.680646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.471 [2024-12-07 10:35:13.680660] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:14.471 [2024-12-07 10:35:13.680670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:14.471 [2024-12-07 10:35:13.680683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:14.471 [2024-12-07 10:35:13.680693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:14.471 [2024-12-07 10:35:13.680704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:14.471 [2024-12-07 10:35:13.680714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:14.471 [2024-12-07 10:35:13.680732] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:14.471 [2024-12-07 10:35:13.680744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.471 [2024-12-07 10:35:13.680761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:14.471 [2024-12-07 10:35:13.680771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:14.471 [2024-12-07 10:35:13.680783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:14.471 [2024-12-07 10:35:13.680792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:14.471 [2024-12-07 10:35:13.680805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:14.471 [2024-12-07 10:35:13.680815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:14.471 [2024-12-07 10:35:13.680827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:14.471 [2024-12-07 10:35:13.680837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:14.471 [2024-12-07 10:35:13.680849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:14.471 [2024-12-07 10:35:13.680859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:14.471 [2024-12-07 10:35:13.680871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:14.471 [2024-12-07 10:35:13.680880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:14.471 [2024-12-07 10:35:13.680892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:14.471 [2024-12-07 10:35:13.680902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:14.471 [2024-12-07 10:35:13.680914] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:14.471 [2024-12-07 
10:35:13.680925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.471 [2024-12-07 10:35:13.680940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:14.471 [2024-12-07 10:35:13.680950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:14.471 [2024-12-07 10:35:13.680963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:14.471 [2024-12-07 10:35:13.680972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:14.471 [2024-12-07 10:35:13.680999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.471 [2024-12-07 10:35:13.681009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:14.471 [2024-12-07 10:35:13.681021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:23:14.471 [2024-12-07 10:35:13.681034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.471 [2024-12-07 10:35:13.720177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.471 [2024-12-07 10:35:13.720211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:14.471 [2024-12-07 10:35:13.720227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.149 ms 00:23:14.471 [2024-12-07 10:35:13.720241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.471 [2024-12-07 10:35:13.720349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.471 [2024-12-07 10:35:13.720362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:14.471 [2024-12-07 10:35:13.720376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:14.471 [2024-12-07 10:35:13.720386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.471 [2024-12-07 10:35:13.766885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.471 [2024-12-07 10:35:13.766924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:14.471 [2024-12-07 10:35:13.766942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.545 ms 00:23:14.471 [2024-12-07 10:35:13.766953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.471 [2024-12-07 10:35:13.767049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.471 [2024-12-07 10:35:13.767062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:14.471 [2024-12-07 10:35:13.767078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:14.471 [2024-12-07 10:35:13.767089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.471 [2024-12-07 10:35:13.767565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.471 [2024-12-07 10:35:13.767591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:14.471 [2024-12-07 10:35:13.767609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:23:14.471 [2024-12-07 10:35:13.767620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:14.471 [2024-12-07 10:35:13.767746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.471 [2024-12-07 10:35:13.767760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:14.471 [2024-12-07 10:35:13.767776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:14.471 [2024-12-07 10:35:13.767787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.471 [2024-12-07 10:35:13.787898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.471 [2024-12-07 10:35:13.788130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:14.471 [2024-12-07 10:35:13.788173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.113 ms 00:23:14.471 [2024-12-07 10:35:13.788185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.730 [2024-12-07 10:35:13.838301] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:14.730 [2024-12-07 10:35:13.838344] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:14.730 [2024-12-07 10:35:13.838366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.730 [2024-12-07 10:35:13.838378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:14.730 [2024-12-07 10:35:13.838395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.145 ms 00:23:14.730 [2024-12-07 10:35:13.838419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.730 [2024-12-07 10:35:13.866758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.731 [2024-12-07 10:35:13.866938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:14.731 [2024-12-07 10:35:13.866972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.288 ms 00:23:14.731 [2024-12-07 10:35:13.866997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-07 10:35:13.883967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.731 [2024-12-07 10:35:13.884010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:14.731 [2024-12-07 10:35:13.884033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.882 ms 00:23:14.731 [2024-12-07 10:35:13.884043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-07 10:35:13.900716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.731 [2024-12-07 10:35:13.900753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:14.731 [2024-12-07 10:35:13.900771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.617 ms 00:23:14.731 [2024-12-07 10:35:13.900781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-07 10:35:13.901583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.731 [2024-12-07 10:35:13.901617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:14.731 [2024-12-07 10:35:13.901636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:23:14.731 [2024-12-07 10:35:13.901647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-07 
10:35:13.982702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.731 [2024-12-07 10:35:13.982971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:14.731 [2024-12-07 10:35:13.983017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.147 ms 00:23:14.731 [2024-12-07 10:35:13.983030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-07 10:35:13.993269] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:14.731 [2024-12-07 10:35:14.009027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.731 [2024-12-07 10:35:14.009081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:14.731 [2024-12-07 10:35:14.009103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.921 ms 00:23:14.731 [2024-12-07 10:35:14.009118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-07 10:35:14.009209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.731 [2024-12-07 10:35:14.009228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:14.731 [2024-12-07 10:35:14.009241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:14.731 [2024-12-07 10:35:14.009255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-07 10:35:14.009313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.731 [2024-12-07 10:35:14.009330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:14.731 [2024-12-07 10:35:14.009342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:14.731 [2024-12-07 10:35:14.009362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-07 10:35:14.009386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.731 [2024-12-07 10:35:14.009402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:14.731 [2024-12-07 10:35:14.009413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:14.731 [2024-12-07 10:35:14.009427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-07 10:35:14.009468] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:14.731 [2024-12-07 10:35:14.009489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.731 [2024-12-07 10:35:14.009505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:14.731 [2024-12-07 10:35:14.009517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:14.731 [2024-12-07 10:35:14.009527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-07 10:35:14.043950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.731 [2024-12-07 10:35:14.044005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:14.731 [2024-12-07 10:35:14.044022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.448 ms 00:23:14.731 [2024-12-07 10:35:14.044033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-07 10:35:14.044186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.731 [2024-12-07 10:35:14.044201] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:14.731 [2024-12-07 10:35:14.044214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:14.731 [2024-12-07 10:35:14.044227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-07 10:35:14.045224] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:14.731 [2024-12-07 10:35:14.049316] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 403.597 ms, result 0 00:23:14.731 [2024-12-07 10:35:14.050600] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:14.731 Some configs were skipped because the RPC state that can call them passed over. 00:23:14.989 10:35:14 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:14.989 [2024-12-07 10:35:14.293303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.989 [2024-12-07 10:35:14.293488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:14.989 [2024-12-07 10:35:14.293576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:23:14.989 [2024-12-07 10:35:14.293625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.989 [2024-12-07 10:35:14.293722] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.866 ms, result 0 00:23:14.989 true 00:23:14.989 10:35:14 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:15.248 [2024-12-07 10:35:14.492872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.248 [2024-12-07 10:35:14.492911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:15.248 [2024-12-07 10:35:14.492930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:23:15.248 [2024-12-07 10:35:14.492941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.248 [2024-12-07 10:35:14.493002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.260 ms, result 0 00:23:15.248 true 00:23:15.248 10:35:14 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78537 00:23:15.248 10:35:14 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78537 ']' 00:23:15.248 10:35:14 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78537 00:23:15.248 10:35:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:15.248 10:35:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.248 10:35:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78537 00:23:15.248 10:35:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:15.248 killing process with pid 78537 00:23:15.248 10:35:14 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:15.248 10:35:14 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78537' 00:23:15.248 10:35:14 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78537 00:23:15.248 10:35:14 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78537 00:23:16.629 [2024-12-07 10:35:15.623494] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.629 [2024-12-07 10:35:15.623558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:16.629 [2024-12-07 10:35:15.623574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:16.629 [2024-12-07 10:35:15.623586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.629 [2024-12-07 10:35:15.623612] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:16.629 [2024-12-07 10:35:15.627915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.629 [2024-12-07 10:35:15.627956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:16.629 [2024-12-07 10:35:15.627982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.288 ms 00:23:16.629 [2024-12-07 10:35:15.627993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.629 [2024-12-07 10:35:15.628241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.629 [2024-12-07 10:35:15.628255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:16.629 [2024-12-07 10:35:15.628268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:23:16.629 [2024-12-07 10:35:15.628278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.629 [2024-12-07 10:35:15.631537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.629 [2024-12-07 10:35:15.631575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:16.629 [2024-12-07 10:35:15.631594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.241 ms 00:23:16.629 [2024-12-07 10:35:15.631605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.629 [2024-12-07 10:35:15.637129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.629 [2024-12-07 10:35:15.637163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:16.629 [2024-12-07 10:35:15.637179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.491 ms 00:23:16.629 [2024-12-07 10:35:15.637205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.629 [2024-12-07 10:35:15.651945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.629 [2024-12-07 10:35:15.651998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:16.629 [2024-12-07 10:35:15.652033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.705 ms 00:23:16.629 [2024-12-07 10:35:15.652043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.629 [2024-12-07 10:35:15.662245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.629 [2024-12-07 10:35:15.662284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:16.629 [2024-12-07 10:35:15.662300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.160 ms 00:23:16.629 [2024-12-07 10:35:15.662310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.629 [2024-12-07 10:35:15.662442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.629 [2024-12-07 10:35:15.662455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:16.629 [2024-12-07 10:35:15.662468] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:16.629 [2024-12-07 10:35:15.662478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.629 [2024-12-07 10:35:15.677623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.629 [2024-12-07 10:35:15.677776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:16.629 [2024-12-07 10:35:15.677822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.141 ms 00:23:16.629 [2024-12-07 10:35:15.677833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.629 [2024-12-07 10:35:15.692437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.629 [2024-12-07 10:35:15.692469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:16.629 [2024-12-07 10:35:15.692493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.567 ms 00:23:16.629 [2024-12-07 10:35:15.692503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.629 [2024-12-07 10:35:15.706597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.629 [2024-12-07 10:35:15.706630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:16.629 [2024-12-07 10:35:15.706649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.043 ms 00:23:16.629 [2024-12-07 10:35:15.706658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.629 [2024-12-07 10:35:15.720940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.629 [2024-12-07 10:35:15.720984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:16.629 [2024-12-07 10:35:15.721020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.232 ms 00:23:16.629 [2024-12-07 10:35:15.721030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.629 [2024-12-07 10:35:15.721101] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:16.629 [2024-12-07 10:35:15.721119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 
10:35:15.721262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:16.629 [2024-12-07 10:35:15.721421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:16.630 [2024-12-07 10:35:15.721613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.721996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:16.630 [2024-12-07 10:35:15.722587] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:16.630 [2024-12-07 10:35:15.722614] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 06a87537-5a92-450d-8735-ed5d8c4b9fb5 00:23:16.630 [2024-12-07 10:35:15.722631] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:16.630 [2024-12-07 10:35:15.722646] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:16.630 [2024-12-07 10:35:15.722656] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:16.630 [2024-12-07 10:35:15.722671] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:16.630 [2024-12-07 10:35:15.722681] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:16.630 [2024-12-07 10:35:15.722697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:16.630 [2024-12-07 10:35:15.722707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:16.630 [2024-12-07 10:35:15.722722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:16.630 [2024-12-07 10:35:15.722731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:16.630 [2024-12-07 10:35:15.722748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
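Once 'FTL startup' completed and the JSON config was loaded, trim.sh@78 and @79 issued the two unmap windows whose short 'FTL trim' management processes appear above (LBA 0 and LBA 23591936, 1024 blocks each, i.e. the first and last 1024 entries of the 23592960-entry L2P), and killprocess/wait then triggered the 'FTL shutdown' whose band and statistics dump is being printed here. The two RPC invocations as captured in the xtrace above, repeated for reference (paths and flags copied verbatim from the log; only the comment is editorial):

  # trim 1024 blocks at the start and at the end of the L2P range
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024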
00:23:16.630 [2024-12-07 10:35:15.722759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:16.630 [2024-12-07 10:35:15.722775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:23:16.630 [2024-12-07 10:35:15.722785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.630 [2024-12-07 10:35:15.742276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.630 [2024-12-07 10:35:15.742313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:16.630 [2024-12-07 10:35:15.742337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.483 ms 00:23:16.630 [2024-12-07 10:35:15.742348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.630 [2024-12-07 10:35:15.742938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.630 [2024-12-07 10:35:15.742956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:16.630 [2024-12-07 10:35:15.742988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:23:16.630 [2024-12-07 10:35:15.743000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.630 [2024-12-07 10:35:15.812093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.630 [2024-12-07 10:35:15.812131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:16.630 [2024-12-07 10:35:15.812150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.630 [2024-12-07 10:35:15.812177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.630 [2024-12-07 10:35:15.812268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.630 [2024-12-07 10:35:15.812281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:16.630 [2024-12-07 10:35:15.812304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.630 [2024-12-07 10:35:15.812315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.630 [2024-12-07 10:35:15.812377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.630 [2024-12-07 10:35:15.812390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:16.630 [2024-12-07 10:35:15.812410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.630 [2024-12-07 10:35:15.812420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.630 [2024-12-07 10:35:15.812445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.630 [2024-12-07 10:35:15.812456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:16.630 [2024-12-07 10:35:15.812471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.630 [2024-12-07 10:35:15.812485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.630 [2024-12-07 10:35:15.933286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.630 [2024-12-07 10:35:15.933495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:16.630 [2024-12-07 10:35:15.933528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.630 [2024-12-07 10:35:15.933540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.890 [2024-12-07 
10:35:16.028488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.890 [2024-12-07 10:35:16.028538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:16.890 [2024-12-07 10:35:16.028558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.890 [2024-12-07 10:35:16.028573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.890 [2024-12-07 10:35:16.028669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.890 [2024-12-07 10:35:16.028680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:16.890 [2024-12-07 10:35:16.028700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.890 [2024-12-07 10:35:16.028711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.890 [2024-12-07 10:35:16.028744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.890 [2024-12-07 10:35:16.028755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:16.890 [2024-12-07 10:35:16.028770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.890 [2024-12-07 10:35:16.028780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.890 [2024-12-07 10:35:16.028908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.890 [2024-12-07 10:35:16.028921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:16.890 [2024-12-07 10:35:16.028935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.890 [2024-12-07 10:35:16.028945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.890 [2024-12-07 10:35:16.029007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.890 [2024-12-07 10:35:16.029037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:16.890 [2024-12-07 10:35:16.029052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.890 [2024-12-07 10:35:16.029062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.890 [2024-12-07 10:35:16.029113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.890 [2024-12-07 10:35:16.029125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:16.890 [2024-12-07 10:35:16.029145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.890 [2024-12-07 10:35:16.029155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.890 [2024-12-07 10:35:16.029206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.890 [2024-12-07 10:35:16.029218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:16.890 [2024-12-07 10:35:16.029233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.890 [2024-12-07 10:35:16.029243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.890 [2024-12-07 10:35:16.029391] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.522 ms, result 0 00:23:17.856 10:35:17 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:17.856 10:35:17 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:17.856 [2024-12-07 10:35:17.106612] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:17.856 [2024-12-07 10:35:17.106724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78605 ] 00:23:18.115 [2024-12-07 10:35:17.283526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.115 [2024-12-07 10:35:17.391409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.681 [2024-12-07 10:35:17.748483] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:18.681 [2024-12-07 10:35:17.748554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:18.681 [2024-12-07 10:35:17.909418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.681 [2024-12-07 10:35:17.909468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:18.681 [2024-12-07 10:35:17.909484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:18.681 [2024-12-07 10:35:17.909495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.681 [2024-12-07 10:35:17.912598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.681 [2024-12-07 10:35:17.912638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:18.681 [2024-12-07 10:35:17.912651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.088 ms 00:23:18.681 [2024-12-07 10:35:17.912676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.681 [2024-12-07 10:35:17.912770] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:18.681 [2024-12-07 10:35:17.913821] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:18.681 [2024-12-07 10:35:17.913856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.681 [2024-12-07 10:35:17.913868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:18.681 [2024-12-07 10:35:17.913878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:23:18.681 [2024-12-07 10:35:17.913888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.681 [2024-12-07 10:35:17.915421] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:18.681 [2024-12-07 10:35:17.933758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.681 [2024-12-07 10:35:17.933795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:18.681 [2024-12-07 10:35:17.933809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.367 ms 00:23:18.681 [2024-12-07 10:35:17.933818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.681 [2024-12-07 10:35:17.933916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.681 [2024-12-07 10:35:17.933929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:18.681 [2024-12-07 10:35:17.933940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.025 ms 00:23:18.681 [2024-12-07 10:35:17.933949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.681 [2024-12-07 10:35:17.940887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.681 [2024-12-07 10:35:17.940914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:18.681 [2024-12-07 10:35:17.940926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.893 ms 00:23:18.681 [2024-12-07 10:35:17.940935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.681 [2024-12-07 10:35:17.941055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.681 [2024-12-07 10:35:17.941070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:18.681 [2024-12-07 10:35:17.941081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:23:18.681 [2024-12-07 10:35:17.941091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.681 [2024-12-07 10:35:17.941122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.681 [2024-12-07 10:35:17.941133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:18.681 [2024-12-07 10:35:17.941143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:18.681 [2024-12-07 10:35:17.941154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.681 [2024-12-07 10:35:17.941175] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:18.681 [2024-12-07 10:35:17.945909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.681 [2024-12-07 10:35:17.945939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:18.681 [2024-12-07 10:35:17.945950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.746 ms 00:23:18.681 [2024-12-07 10:35:17.945959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.681 [2024-12-07 10:35:17.946054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.681 [2024-12-07 10:35:17.946067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:18.681 [2024-12-07 10:35:17.946077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:18.681 [2024-12-07 10:35:17.946087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.681 [2024-12-07 10:35:17.946113] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:18.681 [2024-12-07 10:35:17.946136] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:18.681 [2024-12-07 10:35:17.946171] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:18.681 [2024-12-07 10:35:17.946189] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:18.681 [2024-12-07 10:35:17.946279] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:18.681 [2024-12-07 10:35:17.946291] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:18.681 [2024-12-07 10:35:17.946304] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:18.681 [2024-12-07 10:35:17.946328] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:18.681 [2024-12-07 10:35:17.946340] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:18.681 [2024-12-07 10:35:17.946351] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:18.681 [2024-12-07 10:35:17.946360] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:18.681 [2024-12-07 10:35:17.946370] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:18.682 [2024-12-07 10:35:17.946379] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:18.682 [2024-12-07 10:35:17.946389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.682 [2024-12-07 10:35:17.946399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:18.682 [2024-12-07 10:35:17.946409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:23:18.682 [2024-12-07 10:35:17.946434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.682 [2024-12-07 10:35:17.946509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.682 [2024-12-07 10:35:17.946524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:18.682 [2024-12-07 10:35:17.946535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:18.682 [2024-12-07 10:35:17.946553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.682 [2024-12-07 10:35:17.946644] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:18.682 [2024-12-07 10:35:17.946657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:18.682 [2024-12-07 10:35:17.946668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.682 [2024-12-07 10:35:17.946678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.682 [2024-12-07 10:35:17.946688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:18.682 [2024-12-07 10:35:17.946698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:18.682 [2024-12-07 10:35:17.946707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:18.682 [2024-12-07 10:35:17.946717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:18.682 [2024-12-07 10:35:17.946726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:18.682 [2024-12-07 10:35:17.946736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.682 [2024-12-07 10:35:17.946745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:18.682 [2024-12-07 10:35:17.946766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:18.682 [2024-12-07 10:35:17.946775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.682 [2024-12-07 10:35:17.946785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:18.682 [2024-12-07 10:35:17.946794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:18.682 [2024-12-07 10:35:17.946804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.682 [2024-12-07 10:35:17.946813] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:18.682 [2024-12-07 10:35:17.946822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:18.682 [2024-12-07 10:35:17.946831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.682 [2024-12-07 10:35:17.946841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:18.682 [2024-12-07 10:35:17.946850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:18.682 [2024-12-07 10:35:17.946859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.682 [2024-12-07 10:35:17.946869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:18.682 [2024-12-07 10:35:17.946878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:18.682 [2024-12-07 10:35:17.946887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.682 [2024-12-07 10:35:17.946896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:18.682 [2024-12-07 10:35:17.946905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:18.682 [2024-12-07 10:35:17.946914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.682 [2024-12-07 10:35:17.946923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:18.682 [2024-12-07 10:35:17.946932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:18.682 [2024-12-07 10:35:17.946941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.682 [2024-12-07 10:35:17.946950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:18.682 [2024-12-07 10:35:17.946960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:18.682 [2024-12-07 10:35:17.946968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.682 [2024-12-07 10:35:17.946993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:18.682 [2024-12-07 10:35:17.947004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:18.682 [2024-12-07 10:35:17.947013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.682 [2024-12-07 10:35:17.947022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:18.682 [2024-12-07 10:35:17.947031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:18.682 [2024-12-07 10:35:17.947040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.682 [2024-12-07 10:35:17.947049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:18.682 [2024-12-07 10:35:17.947058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:18.682 [2024-12-07 10:35:17.947068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.682 [2024-12-07 10:35:17.947078] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:18.682 [2024-12-07 10:35:17.947089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:18.682 [2024-12-07 10:35:17.947102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.682 [2024-12-07 10:35:17.947112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.682 [2024-12-07 10:35:17.947122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:18.682 
[2024-12-07 10:35:17.947131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:18.682 [2024-12-07 10:35:17.947141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:18.682 [2024-12-07 10:35:17.947151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:18.682 [2024-12-07 10:35:17.947160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:18.682 [2024-12-07 10:35:17.947169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:18.682 [2024-12-07 10:35:17.947180] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:18.682 [2024-12-07 10:35:17.947193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.682 [2024-12-07 10:35:17.947204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:18.682 [2024-12-07 10:35:17.947215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:18.682 [2024-12-07 10:35:17.947225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:18.682 [2024-12-07 10:35:17.947235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:18.682 [2024-12-07 10:35:17.947246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:18.682 [2024-12-07 10:35:17.947256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:18.682 [2024-12-07 10:35:17.947266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:18.682 [2024-12-07 10:35:17.947276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:18.682 [2024-12-07 10:35:17.947287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:18.682 [2024-12-07 10:35:17.947297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:18.682 [2024-12-07 10:35:17.947308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:18.682 [2024-12-07 10:35:17.947317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:18.682 [2024-12-07 10:35:17.947328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:18.682 [2024-12-07 10:35:17.947338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:18.682 [2024-12-07 10:35:17.947349] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:18.682 [2024-12-07 10:35:17.947360] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.682 [2024-12-07 10:35:17.947371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:18.682 [2024-12-07 10:35:17.947381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:18.682 [2024-12-07 10:35:17.947391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:18.682 [2024-12-07 10:35:17.947402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:18.682 [2024-12-07 10:35:17.947415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.682 [2024-12-07 10:35:17.947429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:18.682 [2024-12-07 10:35:17.947439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:23:18.682 [2024-12-07 10:35:17.947449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.682 [2024-12-07 10:35:17.986680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.682 [2024-12-07 10:35:17.986715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:18.682 [2024-12-07 10:35:17.986728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.234 ms 00:23:18.682 [2024-12-07 10:35:17.986754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.682 [2024-12-07 10:35:17.986871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.682 [2024-12-07 10:35:17.986884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:18.682 [2024-12-07 10:35:17.986895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:18.682 [2024-12-07 10:35:17.986905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.045701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.045868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:18.945 [2024-12-07 10:35:18.046024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.869 ms 00:23:18.945 [2024-12-07 10:35:18.046043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.046139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.046152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:18.945 [2024-12-07 10:35:18.046164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:18.945 [2024-12-07 10:35:18.046175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.046633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.046648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:18.945 [2024-12-07 10:35:18.046665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:23:18.945 [2024-12-07 10:35:18.046675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 
10:35:18.046792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.046806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:18.945 [2024-12-07 10:35:18.046817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:18.945 [2024-12-07 10:35:18.046827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.065766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.065801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:18.945 [2024-12-07 10:35:18.065816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.948 ms 00:23:18.945 [2024-12-07 10:35:18.065826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.085077] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:18.945 [2024-12-07 10:35:18.085116] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:18.945 [2024-12-07 10:35:18.085132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.085143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:18.945 [2024-12-07 10:35:18.085155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.234 ms 00:23:18.945 [2024-12-07 10:35:18.085164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.115073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.115216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:18.945 [2024-12-07 10:35:18.115238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.875 ms 00:23:18.945 [2024-12-07 10:35:18.115249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.133732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.133890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:18.945 [2024-12-07 10:35:18.133911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.431 ms 00:23:18.945 [2024-12-07 10:35:18.133936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.151769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.151927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:18.945 [2024-12-07 10:35:18.151947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.751 ms 00:23:18.945 [2024-12-07 10:35:18.151957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.152821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.152857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:18.945 [2024-12-07 10:35:18.152870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:23:18.945 [2024-12-07 10:35:18.152880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.236444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.236500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:18.945 [2024-12-07 10:35:18.236517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.670 ms 00:23:18.945 [2024-12-07 10:35:18.236544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.247377] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:18.945 [2024-12-07 10:35:18.263700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.263746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:18.945 [2024-12-07 10:35:18.263782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.092 ms 00:23:18.945 [2024-12-07 10:35:18.263793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.263926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.263940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:18.945 [2024-12-07 10:35:18.263951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:18.945 [2024-12-07 10:35:18.263961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.264059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.264073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:18.945 [2024-12-07 10:35:18.264087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:23:18.945 [2024-12-07 10:35:18.264101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.264135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.264149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:18.945 [2024-12-07 10:35:18.264160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:18.945 [2024-12-07 10:35:18.264169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.945 [2024-12-07 10:35:18.264205] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:18.945 [2024-12-07 10:35:18.264217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.945 [2024-12-07 10:35:18.264226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:18.945 [2024-12-07 10:35:18.264237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:18.945 [2024-12-07 10:35:18.264247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.203 [2024-12-07 10:35:18.299556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.203 [2024-12-07 10:35:18.299597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:19.203 [2024-12-07 10:35:18.299612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.339 ms 00:23:19.203 [2024-12-07 10:35:18.299622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.203 [2024-12-07 10:35:18.299733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.203 [2024-12-07 10:35:18.299746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:23:19.203 [2024-12-07 10:35:18.299758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:19.203 [2024-12-07 10:35:18.299773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.203 [2024-12-07 10:35:18.300722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:19.203 [2024-12-07 10:35:18.305063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 391.645 ms, result 0 00:23:19.203 [2024-12-07 10:35:18.305862] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:19.203 [2024-12-07 10:35:18.324617] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:20.140  [2024-12-07T10:35:20.431Z] Copying: 26/256 [MB] (26 MBps) [2024-12-07T10:35:21.369Z] Copying: 50/256 [MB] (24 MBps) [2024-12-07T10:35:22.746Z] Copying: 74/256 [MB] (24 MBps) [2024-12-07T10:35:23.684Z] Copying: 99/256 [MB] (24 MBps) [2024-12-07T10:35:24.622Z] Copying: 123/256 [MB] (24 MBps) [2024-12-07T10:35:25.624Z] Copying: 147/256 [MB] (23 MBps) [2024-12-07T10:35:26.559Z] Copying: 172/256 [MB] (24 MBps) [2024-12-07T10:35:27.495Z] Copying: 196/256 [MB] (24 MBps) [2024-12-07T10:35:28.433Z] Copying: 222/256 [MB] (25 MBps) [2024-12-07T10:35:29.003Z] Copying: 246/256 [MB] (24 MBps) [2024-12-07T10:35:29.003Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-07 10:35:28.719636] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:29.650 [2024-12-07 10:35:28.734483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.650 [2024-12-07 10:35:28.734542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:29.650 [2024-12-07 10:35:28.734565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:29.650 [2024-12-07 10:35:28.734576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.650 [2024-12-07 10:35:28.734602] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:29.650 [2024-12-07 10:35:28.738875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.650 [2024-12-07 10:35:28.738904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:29.650 [2024-12-07 10:35:28.738915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.247 ms 00:23:29.650 [2024-12-07 10:35:28.738925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.650 [2024-12-07 10:35:28.739183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.650 [2024-12-07 10:35:28.739201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:29.650 [2024-12-07 10:35:28.739212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:23:29.650 [2024-12-07 10:35:28.739222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.650 [2024-12-07 10:35:28.742020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.650 [2024-12-07 10:35:28.742041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:29.650 [2024-12-07 10:35:28.742052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.782 ms 00:23:29.650 [2024-12-07 10:35:28.742062] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.650 [2024-12-07 10:35:28.747490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.650 [2024-12-07 10:35:28.747521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:29.650 [2024-12-07 10:35:28.747532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.404 ms 00:23:29.650 [2024-12-07 10:35:28.747542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.650 [2024-12-07 10:35:28.783365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.650 [2024-12-07 10:35:28.783404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:29.650 [2024-12-07 10:35:28.783418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.814 ms 00:23:29.650 [2024-12-07 10:35:28.783427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.650 [2024-12-07 10:35:28.804154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.650 [2024-12-07 10:35:28.804197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:29.650 [2024-12-07 10:35:28.804211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.703 ms 00:23:29.650 [2024-12-07 10:35:28.804237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.650 [2024-12-07 10:35:28.804375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.650 [2024-12-07 10:35:28.804389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:29.650 [2024-12-07 10:35:28.804410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:23:29.650 [2024-12-07 10:35:28.804421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.650 [2024-12-07 10:35:28.840264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.650 [2024-12-07 10:35:28.840310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:29.650 [2024-12-07 10:35:28.840323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.883 ms 00:23:29.650 [2024-12-07 10:35:28.840332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.650 [2024-12-07 10:35:28.875114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.650 [2024-12-07 10:35:28.875149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:29.650 [2024-12-07 10:35:28.875162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.785 ms 00:23:29.650 [2024-12-07 10:35:28.875187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.650 [2024-12-07 10:35:28.909199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.650 [2024-12-07 10:35:28.909374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:29.650 [2024-12-07 10:35:28.909395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.010 ms 00:23:29.650 [2024-12-07 10:35:28.909405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.650 [2024-12-07 10:35:28.943198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.650 [2024-12-07 10:35:28.943372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:29.650 [2024-12-07 10:35:28.943393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 33.767 ms 00:23:29.650 [2024-12-07 10:35:28.943403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.650 [2024-12-07 10:35:28.943456] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:29.650 [2024-12-07 10:35:28.943486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:29.650 [2024-12-07 10:35:28.943669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 
[2024-12-07 10:35:28.943730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.943996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:23:29.651 [2024-12-07 10:35:28.944019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:29.651 [2024-12-07 10:35:28.944588] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:29.651 [2024-12-07 10:35:28.944598] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 06a87537-5a92-450d-8735-ed5d8c4b9fb5 00:23:29.651 [2024-12-07 10:35:28.944609] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:29.651 [2024-12-07 10:35:28.944619] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:29.651 [2024-12-07 10:35:28.944628] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:29.651 [2024-12-07 10:35:28.944639] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:29.651 [2024-12-07 10:35:28.944648] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:29.651 [2024-12-07 10:35:28.944662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:29.652 [2024-12-07 10:35:28.944673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:29.652 [2024-12-07 10:35:28.944681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:29.652 [2024-12-07 10:35:28.944690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:29.652 [2024-12-07 10:35:28.944700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.652 [2024-12-07 10:35:28.944710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:29.652 [2024-12-07 10:35:28.944720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:23:29.652 [2024-12-07 10:35:28.944732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.652 [2024-12-07 10:35:28.963792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.652 [2024-12-07 10:35:28.963824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:29.652 [2024-12-07 10:35:28.963835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.068 ms 00:23:29.652 [2024-12-07 10:35:28.963850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.652 [2024-12-07 10:35:28.964480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.652 [2024-12-07 10:35:28.964503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:29.652 [2024-12-07 10:35:28.964514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:23:29.652 [2024-12-07 10:35:28.964524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.911 [2024-12-07 10:35:29.017827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.911 [2024-12-07 10:35:29.018005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:29.911 [2024-12-07 10:35:29.018032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.911 [2024-12-07 10:35:29.018043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.911 [2024-12-07 10:35:29.018122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.911 [2024-12-07 
10:35:29.018134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:29.911 [2024-12-07 10:35:29.018144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.911 [2024-12-07 10:35:29.018154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.911 [2024-12-07 10:35:29.018207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.911 [2024-12-07 10:35:29.018221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:29.911 [2024-12-07 10:35:29.018231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.911 [2024-12-07 10:35:29.018246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.911 [2024-12-07 10:35:29.018265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.911 [2024-12-07 10:35:29.018276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:29.911 [2024-12-07 10:35:29.018285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.911 [2024-12-07 10:35:29.018295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.912 [2024-12-07 10:35:29.137055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.912 [2024-12-07 10:35:29.137107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:29.912 [2024-12-07 10:35:29.137133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.912 [2024-12-07 10:35:29.137148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.912 [2024-12-07 10:35:29.232154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.912 [2024-12-07 10:35:29.232201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:29.912 [2024-12-07 10:35:29.232214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.912 [2024-12-07 10:35:29.232224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.912 [2024-12-07 10:35:29.232284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.912 [2024-12-07 10:35:29.232295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:29.912 [2024-12-07 10:35:29.232306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.912 [2024-12-07 10:35:29.232315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.912 [2024-12-07 10:35:29.232347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.912 [2024-12-07 10:35:29.232357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:29.912 [2024-12-07 10:35:29.232367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.912 [2024-12-07 10:35:29.232377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.912 [2024-12-07 10:35:29.232487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.912 [2024-12-07 10:35:29.232499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:29.912 [2024-12-07 10:35:29.232510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.912 [2024-12-07 10:35:29.232518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.912 [2024-12-07 10:35:29.232554] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.912 [2024-12-07 10:35:29.232569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:29.912 [2024-12-07 10:35:29.232579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.912 [2024-12-07 10:35:29.232588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.912 [2024-12-07 10:35:29.232645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.912 [2024-12-07 10:35:29.232658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:29.912 [2024-12-07 10:35:29.232668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.912 [2024-12-07 10:35:29.232678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.912 [2024-12-07 10:35:29.232733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.912 [2024-12-07 10:35:29.232744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:29.912 [2024-12-07 10:35:29.232754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.912 [2024-12-07 10:35:29.232763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.912 [2024-12-07 10:35:29.232895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 499.221 ms, result 0 00:23:31.290 00:23:31.290 00:23:31.290 10:35:30 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:23:31.290 10:35:30 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:31.550 10:35:30 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:31.550 [2024-12-07 10:35:30.825451] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:23:31.550 [2024-12-07 10:35:30.825583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78750 ] 00:23:31.808 [2024-12-07 10:35:31.006627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.808 [2024-12-07 10:35:31.125486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.377 [2024-12-07 10:35:31.478311] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:32.377 [2024-12-07 10:35:31.478387] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:32.377 [2024-12-07 10:35:31.639728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.377 [2024-12-07 10:35:31.639776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:32.377 [2024-12-07 10:35:31.639792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:32.377 [2024-12-07 10:35:31.639802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.377 [2024-12-07 10:35:31.642993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.377 [2024-12-07 10:35:31.643031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:32.377 [2024-12-07 10:35:31.643060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.160 ms 00:23:32.377 [2024-12-07 10:35:31.643071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.377 [2024-12-07 10:35:31.643166] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:32.378 [2024-12-07 10:35:31.644098] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:32.378 [2024-12-07 10:35:31.644134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.378 [2024-12-07 10:35:31.644146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:32.378 [2024-12-07 10:35:31.644157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:23:32.378 [2024-12-07 10:35:31.644168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.378 [2024-12-07 10:35:31.645669] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:32.378 [2024-12-07 10:35:31.664622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.378 [2024-12-07 10:35:31.664660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:32.378 [2024-12-07 10:35:31.664673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.984 ms 00:23:32.378 [2024-12-07 10:35:31.664683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.378 [2024-12-07 10:35:31.664816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.378 [2024-12-07 10:35:31.664834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:32.378 [2024-12-07 10:35:31.664845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:32.378 [2024-12-07 10:35:31.664855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.378 [2024-12-07 10:35:31.671809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:32.378 [2024-12-07 10:35:31.671836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:32.378 [2024-12-07 10:35:31.671847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.923 ms 00:23:32.378 [2024-12-07 10:35:31.671856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.378 [2024-12-07 10:35:31.671968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.378 [2024-12-07 10:35:31.671982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:32.378 [2024-12-07 10:35:31.672006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:32.378 [2024-12-07 10:35:31.672017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.378 [2024-12-07 10:35:31.672049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.378 [2024-12-07 10:35:31.672060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:32.378 [2024-12-07 10:35:31.672071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:32.378 [2024-12-07 10:35:31.672080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.378 [2024-12-07 10:35:31.672102] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:32.378 [2024-12-07 10:35:31.676947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.378 [2024-12-07 10:35:31.676984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:32.378 [2024-12-07 10:35:31.676996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.858 ms 00:23:32.378 [2024-12-07 10:35:31.677023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.378 [2024-12-07 10:35:31.677092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.378 [2024-12-07 10:35:31.677105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:32.378 [2024-12-07 10:35:31.677116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:32.378 [2024-12-07 10:35:31.677127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.378 [2024-12-07 10:35:31.677152] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:32.378 [2024-12-07 10:35:31.677185] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:32.378 [2024-12-07 10:35:31.677258] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:32.378 [2024-12-07 10:35:31.677281] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:32.378 [2024-12-07 10:35:31.677370] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:32.378 [2024-12-07 10:35:31.677384] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:32.378 [2024-12-07 10:35:31.677397] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:32.378 [2024-12-07 10:35:31.677414] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:32.378 [2024-12-07 10:35:31.677426] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:32.378 [2024-12-07 10:35:31.677438] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:32.378 [2024-12-07 10:35:31.677448] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:32.378 [2024-12-07 10:35:31.677458] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:32.378 [2024-12-07 10:35:31.677468] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:32.378 [2024-12-07 10:35:31.677479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.378 [2024-12-07 10:35:31.677489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:32.378 [2024-12-07 10:35:31.677499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:23:32.378 [2024-12-07 10:35:31.677509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.378 [2024-12-07 10:35:31.677588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.378 [2024-12-07 10:35:31.677605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:32.378 [2024-12-07 10:35:31.677615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:32.378 [2024-12-07 10:35:31.677625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.378 [2024-12-07 10:35:31.677715] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:32.378 [2024-12-07 10:35:31.677728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:32.378 [2024-12-07 10:35:31.677738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:32.378 [2024-12-07 10:35:31.677749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.378 [2024-12-07 10:35:31.677759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:32.378 [2024-12-07 10:35:31.677768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:32.378 [2024-12-07 10:35:31.677778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:32.378 [2024-12-07 10:35:31.677787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:32.378 [2024-12-07 10:35:31.677797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:32.378 [2024-12-07 10:35:31.677806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:32.378 [2024-12-07 10:35:31.677816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:32.378 [2024-12-07 10:35:31.677835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:32.378 [2024-12-07 10:35:31.677845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:32.378 [2024-12-07 10:35:31.677855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:32.378 [2024-12-07 10:35:31.677864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:32.378 [2024-12-07 10:35:31.677874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.378 [2024-12-07 10:35:31.677883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:32.378 [2024-12-07 10:35:31.677892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:32.378 [2024-12-07 10:35:31.677901] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.378 [2024-12-07 10:35:31.677911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:32.378 [2024-12-07 10:35:31.677920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:32.379 [2024-12-07 10:35:31.677929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.379 [2024-12-07 10:35:31.677939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:32.379 [2024-12-07 10:35:31.677948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:32.379 [2024-12-07 10:35:31.677957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.379 [2024-12-07 10:35:31.677967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:32.379 [2024-12-07 10:35:31.677976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:32.379 [2024-12-07 10:35:31.677985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.379 [2024-12-07 10:35:31.678007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:32.379 [2024-12-07 10:35:31.678017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:32.379 [2024-12-07 10:35:31.678026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.379 [2024-12-07 10:35:31.678035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:32.379 [2024-12-07 10:35:31.678044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:32.379 [2024-12-07 10:35:31.678053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:32.379 [2024-12-07 10:35:31.678063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:32.379 [2024-12-07 10:35:31.678072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:32.379 [2024-12-07 10:35:31.678081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:32.379 [2024-12-07 10:35:31.678090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:32.379 [2024-12-07 10:35:31.678099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:32.379 [2024-12-07 10:35:31.678108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.379 [2024-12-07 10:35:31.678117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:32.379 [2024-12-07 10:35:31.678126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:32.379 [2024-12-07 10:35:31.678136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.379 [2024-12-07 10:35:31.678145] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:32.379 [2024-12-07 10:35:31.678155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:32.379 [2024-12-07 10:35:31.678169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:32.379 [2024-12-07 10:35:31.678179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.379 [2024-12-07 10:35:31.678189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:32.379 [2024-12-07 10:35:31.678198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:32.379 [2024-12-07 10:35:31.678208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:32.379 
[2024-12-07 10:35:31.678217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:32.379 [2024-12-07 10:35:31.678226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:32.379 [2024-12-07 10:35:31.678237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:32.379 [2024-12-07 10:35:31.678248] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:32.379 [2024-12-07 10:35:31.678266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:32.379 [2024-12-07 10:35:31.678277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:32.379 [2024-12-07 10:35:31.678289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:32.379 [2024-12-07 10:35:31.678299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:32.379 [2024-12-07 10:35:31.678309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:32.379 [2024-12-07 10:35:31.678320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:32.379 [2024-12-07 10:35:31.678330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:32.379 [2024-12-07 10:35:31.678341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:32.379 [2024-12-07 10:35:31.678351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:32.379 [2024-12-07 10:35:31.678362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:32.379 [2024-12-07 10:35:31.678372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:32.379 [2024-12-07 10:35:31.678382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:32.379 [2024-12-07 10:35:31.678392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:32.379 [2024-12-07 10:35:31.678402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:32.379 [2024-12-07 10:35:31.678412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:32.379 [2024-12-07 10:35:31.678422] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:32.379 [2024-12-07 10:35:31.678433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:32.379 [2024-12-07 10:35:31.678444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:32.379 [2024-12-07 10:35:31.678454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:32.379 [2024-12-07 10:35:31.678464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:32.379 [2024-12-07 10:35:31.678474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:32.379 [2024-12-07 10:35:31.678485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.379 [2024-12-07 10:35:31.678499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:32.379 [2024-12-07 10:35:31.678509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:23:32.379 [2024-12-07 10:35:31.678519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.379 [2024-12-07 10:35:31.714795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.379 [2024-12-07 10:35:31.714830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:32.379 [2024-12-07 10:35:31.714843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.269 ms 00:23:32.379 [2024-12-07 10:35:31.714855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.379 [2024-12-07 10:35:31.714972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.379 [2024-12-07 10:35:31.714996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:32.379 [2024-12-07 10:35:31.715007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:32.379 [2024-12-07 10:35:31.715017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.639 [2024-12-07 10:35:31.777674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.639 [2024-12-07 10:35:31.777711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:32.639 [2024-12-07 10:35:31.777728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.734 ms 00:23:32.639 [2024-12-07 10:35:31.777739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.639 [2024-12-07 10:35:31.777842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.639 [2024-12-07 10:35:31.777855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:32.639 [2024-12-07 10:35:31.777873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:32.639 [2024-12-07 10:35:31.777883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.639 [2024-12-07 10:35:31.778338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.639 [2024-12-07 10:35:31.778358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:32.639 [2024-12-07 10:35:31.778376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:23:32.639 [2024-12-07 10:35:31.778387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.639 [2024-12-07 10:35:31.778511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.639 [2024-12-07 10:35:31.778525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:32.639 [2024-12-07 10:35:31.778536] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:23:32.639 [2024-12-07 10:35:31.778546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.639 [2024-12-07 10:35:31.797246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.639 [2024-12-07 10:35:31.797278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:32.640 [2024-12-07 10:35:31.797291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.698 ms 00:23:32.640 [2024-12-07 10:35:31.797300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.640 [2024-12-07 10:35:31.815628] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:32.640 [2024-12-07 10:35:31.815666] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:32.640 [2024-12-07 10:35:31.815680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.640 [2024-12-07 10:35:31.815691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:32.640 [2024-12-07 10:35:31.815702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.295 ms 00:23:32.640 [2024-12-07 10:35:31.815727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.640 [2024-12-07 10:35:31.843316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.640 [2024-12-07 10:35:31.843354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:32.640 [2024-12-07 10:35:31.843383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.552 ms 00:23:32.640 [2024-12-07 10:35:31.843394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.640 [2024-12-07 10:35:31.861143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.640 [2024-12-07 10:35:31.861177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:32.640 [2024-12-07 10:35:31.861189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.697 ms 00:23:32.640 [2024-12-07 10:35:31.861199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.640 [2024-12-07 10:35:31.878216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.640 [2024-12-07 10:35:31.878250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:32.640 [2024-12-07 10:35:31.878261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.944 ms 00:23:32.640 [2024-12-07 10:35:31.878270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.640 [2024-12-07 10:35:31.879038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.640 [2024-12-07 10:35:31.879068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:32.640 [2024-12-07 10:35:31.879080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:23:32.640 [2024-12-07 10:35:31.879090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.640 [2024-12-07 10:35:31.961253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.640 [2024-12-07 10:35:31.961315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:32.640 [2024-12-07 10:35:31.961331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.266 ms 00:23:32.640 [2024-12-07 10:35:31.961358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.640 [2024-12-07 10:35:31.971639] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:32.640 [2024-12-07 10:35:31.987254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.640 [2024-12-07 10:35:31.987295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:32.640 [2024-12-07 10:35:31.987326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.843 ms 00:23:32.640 [2024-12-07 10:35:31.987343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.640 [2024-12-07 10:35:31.987452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.640 [2024-12-07 10:35:31.987467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:32.640 [2024-12-07 10:35:31.987478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:32.640 [2024-12-07 10:35:31.987489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.640 [2024-12-07 10:35:31.987545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.640 [2024-12-07 10:35:31.987557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:32.640 [2024-12-07 10:35:31.987568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:32.640 [2024-12-07 10:35:31.987582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.640 [2024-12-07 10:35:31.987615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.640 [2024-12-07 10:35:31.987629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:32.640 [2024-12-07 10:35:31.987639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:32.640 [2024-12-07 10:35:31.987649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.640 [2024-12-07 10:35:31.987688] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:32.640 [2024-12-07 10:35:31.987701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.640 [2024-12-07 10:35:31.987711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:32.640 [2024-12-07 10:35:31.987721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:32.640 [2024-12-07 10:35:31.987731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.899 [2024-12-07 10:35:32.022016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.899 [2024-12-07 10:35:32.022055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:32.899 [2024-12-07 10:35:32.022085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.319 ms 00:23:32.899 [2024-12-07 10:35:32.022096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.899 [2024-12-07 10:35:32.022205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.899 [2024-12-07 10:35:32.022219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:32.899 [2024-12-07 10:35:32.022229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:32.899 [2024-12-07 10:35:32.022239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:32.899 [2024-12-07 10:35:32.023198] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:32.899 [2024-12-07 10:35:32.027491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.786 ms, result 0 00:23:32.899 [2024-12-07 10:35:32.028363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:32.899 [2024-12-07 10:35:32.045527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:32.899  [2024-12-07T10:35:32.252Z] Copying: 4096/4096 [kB] (average 23 MBps)[2024-12-07 10:35:32.221574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:32.899 [2024-12-07 10:35:32.236121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.899 [2024-12-07 10:35:32.236169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:32.899 [2024-12-07 10:35:32.236186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:32.899 [2024-12-07 10:35:32.236212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.899 [2024-12-07 10:35:32.236235] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:32.899 [2024-12-07 10:35:32.240321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.899 [2024-12-07 10:35:32.240348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:32.899 [2024-12-07 10:35:32.240361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.078 ms 00:23:32.899 [2024-12-07 10:35:32.240386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.899 [2024-12-07 10:35:32.242390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.899 [2024-12-07 10:35:32.242425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:32.899 [2024-12-07 10:35:32.242437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.984 ms 00:23:32.899 [2024-12-07 10:35:32.242447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.899 [2024-12-07 10:35:32.245736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.899 [2024-12-07 10:35:32.245767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:32.899 [2024-12-07 10:35:32.245779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:23:32.899 [2024-12-07 10:35:32.245790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.159 [2024-12-07 10:35:32.251441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.159 [2024-12-07 10:35:32.251472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:33.159 [2024-12-07 10:35:32.251485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.630 ms 00:23:33.159 [2024-12-07 10:35:32.251496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.159 [2024-12-07 10:35:32.287197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.159 [2024-12-07 10:35:32.287234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:33.159 [2024-12-07 10:35:32.287262] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 35.695 ms 00:23:33.159 [2024-12-07 10:35:32.287272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.159 [2024-12-07 10:35:32.307796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.159 [2024-12-07 10:35:32.307840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:33.159 [2024-12-07 10:35:32.307854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.484 ms 00:23:33.159 [2024-12-07 10:35:32.307866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.159 [2024-12-07 10:35:32.308012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.159 [2024-12-07 10:35:32.308026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:33.159 [2024-12-07 10:35:32.308049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:23:33.159 [2024-12-07 10:35:32.308059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.159 [2024-12-07 10:35:32.343807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.159 [2024-12-07 10:35:32.343844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:33.159 [2024-12-07 10:35:32.343858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.788 ms 00:23:33.159 [2024-12-07 10:35:32.343868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.159 [2024-12-07 10:35:32.379408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.159 [2024-12-07 10:35:32.379441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:33.159 [2024-12-07 10:35:32.379453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.542 ms 00:23:33.159 [2024-12-07 10:35:32.379462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.159 [2024-12-07 10:35:32.412870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.159 [2024-12-07 10:35:32.412904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:33.159 [2024-12-07 10:35:32.412916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.393 ms 00:23:33.159 [2024-12-07 10:35:32.412925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.159 [2024-12-07 10:35:32.446516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.159 [2024-12-07 10:35:32.446555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:33.159 [2024-12-07 10:35:32.446567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.544 ms 00:23:33.159 [2024-12-07 10:35:32.446576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.159 [2024-12-07 10:35:32.446643] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:33.159 [2024-12-07 10:35:32.446658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:23:33.159 [2024-12-07 10:35:32.446704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:33.159 [2024-12-07 10:35:32.446838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.446848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.446858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.446868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.446878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.446889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.446899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.446908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.446919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.446928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.446954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.446966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.446977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.446987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447498] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:33.160 [2024-12-07 10:35:32.447749] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:33.160 [2024-12-07 10:35:32.447759] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 06a87537-5a92-450d-8735-ed5d8c4b9fb5 00:23:33.160 [2024-12-07 10:35:32.447770] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:33.160 [2024-12-07 10:35:32.447780] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:23:33.160 [2024-12-07 10:35:32.447789] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:33.160 [2024-12-07 10:35:32.447800] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:33.160 [2024-12-07 10:35:32.447809] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:33.160 [2024-12-07 10:35:32.447819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:33.160 [2024-12-07 10:35:32.447833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:33.160 [2024-12-07 10:35:32.447842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:33.161 [2024-12-07 10:35:32.447851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:33.161 [2024-12-07 10:35:32.447861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.161 [2024-12-07 10:35:32.447871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:33.161 [2024-12-07 10:35:32.447882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:23:33.161 [2024-12-07 10:35:32.447893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.161 [2024-12-07 10:35:32.467187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.161 [2024-12-07 10:35:32.467218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:33.161 [2024-12-07 10:35:32.467246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.306 ms 00:23:33.161 [2024-12-07 10:35:32.467255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.161 [2024-12-07 10:35:32.467845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.161 [2024-12-07 10:35:32.467867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:33.161 [2024-12-07 10:35:32.467878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:23:33.161 [2024-12-07 10:35:32.467888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.420 [2024-12-07 10:35:32.520210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.420 [2024-12-07 10:35:32.520244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:33.420 [2024-12-07 10:35:32.520256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.420 [2024-12-07 10:35:32.520270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.420 [2024-12-07 10:35:32.520368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.420 [2024-12-07 10:35:32.520379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:33.420 [2024-12-07 10:35:32.520390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.420 [2024-12-07 10:35:32.520400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.420 [2024-12-07 10:35:32.520446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.420 [2024-12-07 10:35:32.520459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:33.420 [2024-12-07 10:35:32.520469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.420 [2024-12-07 10:35:32.520479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.420 [2024-12-07 10:35:32.520501] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.420 [2024-12-07 10:35:32.520512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:33.420 [2024-12-07 10:35:32.520521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.420 [2024-12-07 10:35:32.520531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.420 [2024-12-07 10:35:32.636815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.420 [2024-12-07 10:35:32.636863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:33.421 [2024-12-07 10:35:32.636876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.421 [2024-12-07 10:35:32.636907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.421 [2024-12-07 10:35:32.730921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.421 [2024-12-07 10:35:32.730967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:33.421 [2024-12-07 10:35:32.730993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.421 [2024-12-07 10:35:32.731003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.421 [2024-12-07 10:35:32.731080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.421 [2024-12-07 10:35:32.731091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.421 [2024-12-07 10:35:32.731102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.421 [2024-12-07 10:35:32.731112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.421 [2024-12-07 10:35:32.731141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.421 [2024-12-07 10:35:32.731156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.421 [2024-12-07 10:35:32.731167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.421 [2024-12-07 10:35:32.731177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.421 [2024-12-07 10:35:32.731290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.421 [2024-12-07 10:35:32.731303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.421 [2024-12-07 10:35:32.731314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.421 [2024-12-07 10:35:32.731340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.421 [2024-12-07 10:35:32.731376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.421 [2024-12-07 10:35:32.731389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:33.421 [2024-12-07 10:35:32.731404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.421 [2024-12-07 10:35:32.731414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.421 [2024-12-07 10:35:32.731455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.421 [2024-12-07 10:35:32.731466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.421 [2024-12-07 10:35:32.731476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.421 [2024-12-07 10:35:32.731487] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:33.421 [2024-12-07 10:35:32.731529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.421 [2024-12-07 10:35:32.731544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.421 [2024-12-07 10:35:32.731554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.421 [2024-12-07 10:35:32.731564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.421 [2024-12-07 10:35:32.731716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 496.373 ms, result 0 00:23:34.800 00:23:34.800 00:23:34.800 10:35:33 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78781 00:23:34.800 10:35:33 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:34.800 10:35:33 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78781 00:23:34.800 10:35:33 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78781 ']' 00:23:34.800 10:35:33 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.800 10:35:33 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.800 10:35:33 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.800 10:35:33 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.800 10:35:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:34.800 [2024-12-07 10:35:33.882435] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:23:34.800 [2024-12-07 10:35:33.882581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78781 ] 00:23:34.800 [2024-12-07 10:35:34.059590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.059 [2024-12-07 10:35:34.165958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.994 10:35:34 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.994 10:35:34 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:35.994 10:35:34 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:35.994 [2024-12-07 10:35:35.203453] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:35.994 [2024-12-07 10:35:35.203527] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.253 [2024-12-07 10:35:35.387546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.253 [2024-12-07 10:35:35.387603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:36.253 [2024-12-07 10:35:35.387638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:36.253 [2024-12-07 10:35:35.387649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.253 [2024-12-07 10:35:35.391108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.253 [2024-12-07 10:35:35.391144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.253 [2024-12-07 10:35:35.391159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.443 ms 00:23:36.253 [2024-12-07 10:35:35.391169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.253 [2024-12-07 10:35:35.391293] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:36.253 [2024-12-07 10:35:35.392261] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:36.253 [2024-12-07 10:35:35.392297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.253 [2024-12-07 10:35:35.392309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.253 [2024-12-07 10:35:35.392322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:23:36.253 [2024-12-07 10:35:35.392332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.253 [2024-12-07 10:35:35.393800] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:36.253 [2024-12-07 10:35:35.412173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.253 [2024-12-07 10:35:35.412217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:36.253 [2024-12-07 10:35:35.412232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.406 ms 00:23:36.253 [2024-12-07 10:35:35.412262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.253 [2024-12-07 10:35:35.412364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.253 [2024-12-07 10:35:35.412384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:36.254 [2024-12-07 10:35:35.412395] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:36.254 [2024-12-07 10:35:35.412410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.419279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.419334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.254 [2024-12-07 10:35:35.419347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.825 ms 00:23:36.254 [2024-12-07 10:35:35.419362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.419494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.419514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.254 [2024-12-07 10:35:35.419526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:36.254 [2024-12-07 10:35:35.419559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.419586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.419601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.254 [2024-12-07 10:35:35.419612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:36.254 [2024-12-07 10:35:35.419625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.419650] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:36.254 [2024-12-07 10:35:35.424437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.424467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.254 [2024-12-07 10:35:35.424484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.795 ms 00:23:36.254 [2024-12-07 10:35:35.424494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.424593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.424606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.254 [2024-12-07 10:35:35.424622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:36.254 [2024-12-07 10:35:35.424638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.424665] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:36.254 [2024-12-07 10:35:35.424692] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:36.254 [2024-12-07 10:35:35.424742] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:36.254 [2024-12-07 10:35:35.424763] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:36.254 [2024-12-07 10:35:35.424858] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:36.254 [2024-12-07 10:35:35.424871] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.254 [2024-12-07 10:35:35.424894] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:36.254 [2024-12-07 10:35:35.424909] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.254 [2024-12-07 10:35:35.424926] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.254 [2024-12-07 10:35:35.424937] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:36.254 [2024-12-07 10:35:35.424952] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.254 [2024-12-07 10:35:35.424962] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:36.254 [2024-12-07 10:35:35.424994] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:36.254 [2024-12-07 10:35:35.425006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.425021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.254 [2024-12-07 10:35:35.425032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:23:36.254 [2024-12-07 10:35:35.425046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.425125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.425141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.254 [2024-12-07 10:35:35.425151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:36.254 [2024-12-07 10:35:35.425164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.425253] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.254 [2024-12-07 10:35:35.425267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.254 [2024-12-07 10:35:35.425278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.254 [2024-12-07 10:35:35.425290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.254 [2024-12-07 10:35:35.425301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:36.254 [2024-12-07 10:35:35.425314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.254 [2024-12-07 10:35:35.425324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:36.254 [2024-12-07 10:35:35.425338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:36.254 [2024-12-07 10:35:35.425348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:36.254 [2024-12-07 10:35:35.425359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.254 [2024-12-07 10:35:35.425368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.254 [2024-12-07 10:35:35.425380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:36.254 [2024-12-07 10:35:35.425390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.254 [2024-12-07 10:35:35.425402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.254 [2024-12-07 10:35:35.425412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:36.254 [2024-12-07 10:35:35.425423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.254 
[2024-12-07 10:35:35.425432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:36.254 [2024-12-07 10:35:35.425444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:36.254 [2024-12-07 10:35:35.425462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.254 [2024-12-07 10:35:35.425474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.254 [2024-12-07 10:35:35.425484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:36.254 [2024-12-07 10:35:35.425495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.254 [2024-12-07 10:35:35.425504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.254 [2024-12-07 10:35:35.425518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:36.254 [2024-12-07 10:35:35.425527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.254 [2024-12-07 10:35:35.425539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.254 [2024-12-07 10:35:35.425548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:36.254 [2024-12-07 10:35:35.425560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.254 [2024-12-07 10:35:35.425569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.254 [2024-12-07 10:35:35.425582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:36.254 [2024-12-07 10:35:35.425591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.254 [2024-12-07 10:35:35.425603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.254 [2024-12-07 10:35:35.425612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:36.254 [2024-12-07 10:35:35.425624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.254 [2024-12-07 10:35:35.425633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.254 [2024-12-07 10:35:35.425644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:36.254 [2024-12-07 10:35:35.425653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.254 [2024-12-07 10:35:35.425665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:36.254 [2024-12-07 10:35:35.425674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:36.254 [2024-12-07 10:35:35.425687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.254 [2024-12-07 10:35:35.425696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:36.254 [2024-12-07 10:35:35.425708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:36.254 [2024-12-07 10:35:35.425717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.254 [2024-12-07 10:35:35.425729] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.254 [2024-12-07 10:35:35.425741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.254 [2024-12-07 10:35:35.425753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.254 [2024-12-07 10:35:35.425763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.254 [2024-12-07 10:35:35.425775] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:36.254 [2024-12-07 10:35:35.425785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.254 [2024-12-07 10:35:35.425796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.254 [2024-12-07 10:35:35.425806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.254 [2024-12-07 10:35:35.425817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.254 [2024-12-07 10:35:35.425826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.254 [2024-12-07 10:35:35.425839] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.254 [2024-12-07 10:35:35.425851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.254 [2024-12-07 10:35:35.425869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:36.254 [2024-12-07 10:35:35.425880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:36.254 [2024-12-07 10:35:35.425892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:36.254 [2024-12-07 10:35:35.425903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:36.254 [2024-12-07 10:35:35.425916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:36.254 [2024-12-07 10:35:35.425925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:36.254 [2024-12-07 10:35:35.425938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:36.254 [2024-12-07 10:35:35.425949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:36.254 [2024-12-07 10:35:35.425961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:36.254 [2024-12-07 10:35:35.425972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:36.254 [2024-12-07 10:35:35.426000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:36.254 [2024-12-07 10:35:35.426010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:36.254 [2024-12-07 10:35:35.426022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:36.254 [2024-12-07 10:35:35.426033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:36.254 [2024-12-07 10:35:35.426046] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:36.254 [2024-12-07 
10:35:35.426057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.254 [2024-12-07 10:35:35.426072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.254 [2024-12-07 10:35:35.426084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.254 [2024-12-07 10:35:35.426096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.254 [2024-12-07 10:35:35.426108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.254 [2024-12-07 10:35:35.426121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.426132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.254 [2024-12-07 10:35:35.426144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:23:36.254 [2024-12-07 10:35:35.426158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.465968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.466012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.254 [2024-12-07 10:35:35.466046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.811 ms 00:23:36.254 [2024-12-07 10:35:35.466062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.466188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.466201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:36.254 [2024-12-07 10:35:35.466216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:36.254 [2024-12-07 10:35:35.466226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.509497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.509540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.254 [2024-12-07 10:35:35.509558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.311 ms 00:23:36.254 [2024-12-07 10:35:35.509569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.509660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.509672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.254 [2024-12-07 10:35:35.509688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:36.254 [2024-12-07 10:35:35.509698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.510184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.510211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.254 [2024-12-07 10:35:35.510228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:23:36.254 [2024-12-07 10:35:35.510238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:36.254 [2024-12-07 10:35:35.510363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.254 [2024-12-07 10:35:35.510388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.254 [2024-12-07 10:35:35.510405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:23:36.255 [2024-12-07 10:35:35.510415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.255 [2024-12-07 10:35:35.532169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.255 [2024-12-07 10:35:35.532205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.255 [2024-12-07 10:35:35.532220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.762 ms 00:23:36.255 [2024-12-07 10:35:35.532231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.255 [2024-12-07 10:35:35.578246] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:36.255 [2024-12-07 10:35:35.578287] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:36.255 [2024-12-07 10:35:35.578307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.255 [2024-12-07 10:35:35.578319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:36.255 [2024-12-07 10:35:35.578334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.043 ms 00:23:36.255 [2024-12-07 10:35:35.578354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.513 [2024-12-07 10:35:35.607310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.513 [2024-12-07 10:35:35.607372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:36.513 [2024-12-07 10:35:35.607407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.911 ms 00:23:36.513 [2024-12-07 10:35:35.607418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.513 [2024-12-07 10:35:35.625145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.513 [2024-12-07 10:35:35.625184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:36.513 [2024-12-07 10:35:35.625217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.669 ms 00:23:36.513 [2024-12-07 10:35:35.625228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.513 [2024-12-07 10:35:35.642285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.513 [2024-12-07 10:35:35.642320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:36.513 [2024-12-07 10:35:35.642335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.006 ms 00:23:36.513 [2024-12-07 10:35:35.642345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.513 [2024-12-07 10:35:35.643151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.513 [2024-12-07 10:35:35.643185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:36.513 [2024-12-07 10:35:35.643201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:23:36.513 [2024-12-07 10:35:35.643212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.513 [2024-12-07 
10:35:35.725078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.513 [2024-12-07 10:35:35.725152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:36.513 [2024-12-07 10:35:35.725172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.949 ms 00:23:36.513 [2024-12-07 10:35:35.725184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.513 [2024-12-07 10:35:35.735884] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:36.513 [2024-12-07 10:35:35.752335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.513 [2024-12-07 10:35:35.752389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:36.513 [2024-12-07 10:35:35.752409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.073 ms 00:23:36.513 [2024-12-07 10:35:35.752422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.513 [2024-12-07 10:35:35.752519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.513 [2024-12-07 10:35:35.752535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:36.513 [2024-12-07 10:35:35.752547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:36.513 [2024-12-07 10:35:35.752560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.513 [2024-12-07 10:35:35.752614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.513 [2024-12-07 10:35:35.752629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:36.513 [2024-12-07 10:35:35.752639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:36.513 [2024-12-07 10:35:35.752655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.513 [2024-12-07 10:35:35.752680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.513 [2024-12-07 10:35:35.752694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:36.513 [2024-12-07 10:35:35.752705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:36.513 [2024-12-07 10:35:35.752717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.513 [2024-12-07 10:35:35.752754] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:36.513 [2024-12-07 10:35:35.752775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.513 [2024-12-07 10:35:35.752789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:36.513 [2024-12-07 10:35:35.752802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:36.513 [2024-12-07 10:35:35.752813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.513 [2024-12-07 10:35:35.789134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.513 [2024-12-07 10:35:35.789174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:36.513 [2024-12-07 10:35:35.789206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.347 ms 00:23:36.513 [2024-12-07 10:35:35.789217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.513 [2024-12-07 10:35:35.789329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.513 [2024-12-07 10:35:35.789344] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:36.513 [2024-12-07 10:35:35.789358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:36.513 [2024-12-07 10:35:35.789371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.513 [2024-12-07 10:35:35.790393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:36.513 [2024-12-07 10:35:35.794641] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 403.217 ms, result 0 00:23:36.513 [2024-12-07 10:35:35.795654] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:36.513 Some configs were skipped because the RPC state that can call them passed over. 00:23:36.513 10:35:35 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:36.771 [2024-12-07 10:35:36.031099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.771 [2024-12-07 10:35:36.031158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:36.771 [2024-12-07 10:35:36.031173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.801 ms 00:23:36.771 [2024-12-07 10:35:36.031188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.771 [2024-12-07 10:35:36.031225] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.927 ms, result 0 00:23:36.771 true 00:23:36.771 10:35:36 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:37.030 [2024-12-07 10:35:36.206498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.030 [2024-12-07 10:35:36.206545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:37.030 [2024-12-07 10:35:36.206571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:23:37.030 [2024-12-07 10:35:36.206581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.030 [2024-12-07 10:35:36.206622] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.442 ms, result 0 00:23:37.030 true 00:23:37.030 10:35:36 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78781 00:23:37.030 10:35:36 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78781 ']' 00:23:37.030 10:35:36 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78781 00:23:37.030 10:35:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:37.030 10:35:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.030 10:35:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78781 00:23:37.030 killing process with pid 78781 00:23:37.030 10:35:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.030 10:35:36 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.030 10:35:36 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78781' 00:23:37.030 10:35:36 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78781 00:23:37.030 10:35:36 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78781 00:23:38.409 [2024-12-07 10:35:37.342415] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.409 [2024-12-07 10:35:37.342483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:38.409 [2024-12-07 10:35:37.342498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:38.409 [2024-12-07 10:35:37.342510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.409 [2024-12-07 10:35:37.342536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:38.409 [2024-12-07 10:35:37.346682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.409 [2024-12-07 10:35:37.346714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:38.409 [2024-12-07 10:35:37.346747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.123 ms 00:23:38.409 [2024-12-07 10:35:37.346757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.409 [2024-12-07 10:35:37.347026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.409 [2024-12-07 10:35:37.347041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:38.409 [2024-12-07 10:35:37.347054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:23:38.409 [2024-12-07 10:35:37.347082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.409 [2024-12-07 10:35:37.350515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.409 [2024-12-07 10:35:37.350559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:38.409 [2024-12-07 10:35:37.350578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.415 ms 00:23:38.409 [2024-12-07 10:35:37.350589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.409 [2024-12-07 10:35:37.355925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.409 [2024-12-07 10:35:37.355960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:38.409 [2024-12-07 10:35:37.356003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.303 ms 00:23:38.409 [2024-12-07 10:35:37.356014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.409 [2024-12-07 10:35:37.370366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.409 [2024-12-07 10:35:37.370409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:38.409 [2024-12-07 10:35:37.370427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.314 ms 00:23:38.409 [2024-12-07 10:35:37.370436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.409 [2024-12-07 10:35:37.381204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.409 [2024-12-07 10:35:37.381243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:38.409 [2024-12-07 10:35:37.381259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.727 ms 00:23:38.409 [2024-12-07 10:35:37.381269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.409 [2024-12-07 10:35:37.381392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.409 [2024-12-07 10:35:37.381406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:38.409 [2024-12-07 10:35:37.381418] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:38.409 [2024-12-07 10:35:37.381427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.409 [2024-12-07 10:35:37.396207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.409 [2024-12-07 10:35:37.396240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:38.409 [2024-12-07 10:35:37.396256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.780 ms 00:23:38.409 [2024-12-07 10:35:37.396265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.409 [2024-12-07 10:35:37.410435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.409 [2024-12-07 10:35:37.410467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:38.409 [2024-12-07 10:35:37.410488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.140 ms 00:23:38.409 [2024-12-07 10:35:37.410497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.409 [2024-12-07 10:35:37.424320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.409 [2024-12-07 10:35:37.424354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:38.409 [2024-12-07 10:35:37.424369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.793 ms 00:23:38.409 [2024-12-07 10:35:37.424378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.409 [2024-12-07 10:35:37.438158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.409 [2024-12-07 10:35:37.438189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:38.409 [2024-12-07 10:35:37.438204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.727 ms 00:23:38.409 [2024-12-07 10:35:37.438213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.409 [2024-12-07 10:35:37.438262] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:38.409 [2024-12-07 10:35:37.438279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:38.409 [2024-12-07 10:35:37.438293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:38.409 [2024-12-07 10:35:37.438304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:38.409 [2024-12-07 10:35:37.438318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:38.409 [2024-12-07 10:35:37.438329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:38.409 [2024-12-07 10:35:37.438344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:38.409 [2024-12-07 10:35:37.438355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:38.409 [2024-12-07 10:35:37.438369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:38.409 [2024-12-07 10:35:37.438379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:38.409 [2024-12-07 10:35:37.438392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:38.409 [2024-12-07 
10:35:37.438402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:38.409 [2024-12-07 10:35:37.438414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:38.409 [2024-12-07 10:35:37.438424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:38.410 [2024-12-07 10:35:37.438745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.438982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:38.410 [2024-12-07 10:35:37.439316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:38.411 [2024-12-07 10:35:37.439577] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:38.411 [2024-12-07 10:35:37.439597] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 06a87537-5a92-450d-8735-ed5d8c4b9fb5 00:23:38.411 [2024-12-07 10:35:37.439611] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:38.411 [2024-12-07 10:35:37.439623] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:38.411 [2024-12-07 10:35:37.439633] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:38.411 [2024-12-07 10:35:37.439646] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:38.411 [2024-12-07 10:35:37.439657] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:38.411 [2024-12-07 10:35:37.439669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:38.411 [2024-12-07 10:35:37.439679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:38.411 [2024-12-07 10:35:37.439690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:38.411 [2024-12-07 10:35:37.439699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:38.411 [2024-12-07 10:35:37.439711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:38.411 [2024-12-07 10:35:37.439721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:38.411 [2024-12-07 10:35:37.439735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.453 ms 00:23:38.411 [2024-12-07 10:35:37.439745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-07 10:35:37.458399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-07 10:35:37.458433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:38.411 [2024-12-07 10:35:37.458451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.640 ms 00:23:38.411 [2024-12-07 10:35:37.458461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-07 10:35:37.459032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-07 10:35:37.459054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:38.411 [2024-12-07 10:35:37.459072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:23:38.411 [2024-12-07 10:35:37.459082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-07 10:35:37.523015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.411 [2024-12-07 10:35:37.523053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.411 [2024-12-07 10:35:37.523084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.411 [2024-12-07 10:35:37.523095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-07 10:35:37.523179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.411 [2024-12-07 10:35:37.523193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.411 [2024-12-07 10:35:37.523209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.411 [2024-12-07 10:35:37.523220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-07 10:35:37.523272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.411 [2024-12-07 10:35:37.523285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.411 [2024-12-07 10:35:37.523301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.411 [2024-12-07 10:35:37.523311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-07 10:35:37.523332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.411 [2024-12-07 10:35:37.523343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.411 [2024-12-07 10:35:37.523357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.411 [2024-12-07 10:35:37.523369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-07 10:35:37.640752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.411 [2024-12-07 10:35:37.640803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.411 [2024-12-07 10:35:37.640821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.411 [2024-12-07 10:35:37.640832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-07 
10:35:37.734491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.411 [2024-12-07 10:35:37.734537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.411 [2024-12-07 10:35:37.734559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.411 [2024-12-07 10:35:37.734573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-07 10:35:37.734662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.411 [2024-12-07 10:35:37.734676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.411 [2024-12-07 10:35:37.734693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.411 [2024-12-07 10:35:37.734703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-07 10:35:37.734735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.411 [2024-12-07 10:35:37.734746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.411 [2024-12-07 10:35:37.734759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.411 [2024-12-07 10:35:37.734769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-07 10:35:37.734893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.411 [2024-12-07 10:35:37.734907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.411 [2024-12-07 10:35:37.734921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.411 [2024-12-07 10:35:37.734932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-07 10:35:37.734988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.411 [2024-12-07 10:35:37.735001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:38.411 [2024-12-07 10:35:37.735034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.411 [2024-12-07 10:35:37.735046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.412 [2024-12-07 10:35:37.735091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.412 [2024-12-07 10:35:37.735103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.412 [2024-12-07 10:35:37.735119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.412 [2024-12-07 10:35:37.735131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.412 [2024-12-07 10:35:37.735177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.412 [2024-12-07 10:35:37.735190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.412 [2024-12-07 10:35:37.735204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.412 [2024-12-07 10:35:37.735214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.412 [2024-12-07 10:35:37.735358] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 393.554 ms, result 0 00:23:39.789 10:35:38 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:39.789 [2024-12-07 10:35:38.828766] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:39.789 [2024-12-07 10:35:38.828936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78845 ] 00:23:39.789 [2024-12-07 10:35:39.016299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.789 [2024-12-07 10:35:39.127070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.359 [2024-12-07 10:35:39.481534] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.359 [2024-12-07 10:35:39.481611] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.359 [2024-12-07 10:35:39.642351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.359 [2024-12-07 10:35:39.642398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:40.359 [2024-12-07 10:35:39.642414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:40.359 [2024-12-07 10:35:39.642424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.359 [2024-12-07 10:35:39.645602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.359 [2024-12-07 10:35:39.645639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.359 [2024-12-07 10:35:39.645666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.146 ms 00:23:40.359 [2024-12-07 10:35:39.645676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.359 [2024-12-07 10:35:39.645823] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:40.359 [2024-12-07 10:35:39.646854] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:40.359 [2024-12-07 10:35:39.646889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.359 [2024-12-07 10:35:39.646900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.359 [2024-12-07 10:35:39.646912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:23:40.359 [2024-12-07 10:35:39.646922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.359 [2024-12-07 10:35:39.648524] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:40.359 [2024-12-07 10:35:39.667803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.359 [2024-12-07 10:35:39.667843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:40.359 [2024-12-07 10:35:39.667856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.310 ms 00:23:40.359 [2024-12-07 10:35:39.667867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.359 [2024-12-07 10:35:39.668001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.359 [2024-12-07 10:35:39.668017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:40.359 [2024-12-07 10:35:39.668029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:40.359 [2024-12-07 
10:35:39.668039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.359 [2024-12-07 10:35:39.674949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.359 [2024-12-07 10:35:39.674984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.359 [2024-12-07 10:35:39.674996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.878 ms 00:23:40.359 [2024-12-07 10:35:39.675006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.359 [2024-12-07 10:35:39.675104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.359 [2024-12-07 10:35:39.675119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.359 [2024-12-07 10:35:39.675130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:40.359 [2024-12-07 10:35:39.675141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.359 [2024-12-07 10:35:39.675173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.359 [2024-12-07 10:35:39.675185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:40.359 [2024-12-07 10:35:39.675195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:40.359 [2024-12-07 10:35:39.675205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.359 [2024-12-07 10:35:39.675228] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:40.359 [2024-12-07 10:35:39.679909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.359 [2024-12-07 10:35:39.679939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.359 [2024-12-07 10:35:39.679950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.694 ms 00:23:40.359 [2024-12-07 10:35:39.679960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.359 [2024-12-07 10:35:39.680051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.359 [2024-12-07 10:35:39.680065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:40.359 [2024-12-07 10:35:39.680076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:40.359 [2024-12-07 10:35:39.680087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.359 [2024-12-07 10:35:39.680113] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:40.359 [2024-12-07 10:35:39.680137] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:40.359 [2024-12-07 10:35:39.680171] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:40.359 [2024-12-07 10:35:39.680188] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:40.359 [2024-12-07 10:35:39.680293] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:40.359 [2024-12-07 10:35:39.680306] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:40.359 [2024-12-07 10:35:39.680319] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:23:40.360 [2024-12-07 10:35:39.680335] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:40.360 [2024-12-07 10:35:39.680348] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:40.360 [2024-12-07 10:35:39.680359] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:40.360 [2024-12-07 10:35:39.680369] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:40.360 [2024-12-07 10:35:39.680380] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:40.360 [2024-12-07 10:35:39.680389] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:40.360 [2024-12-07 10:35:39.680400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.360 [2024-12-07 10:35:39.680410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:40.360 [2024-12-07 10:35:39.680421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:23:40.360 [2024-12-07 10:35:39.680430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.360 [2024-12-07 10:35:39.680506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.360 [2024-12-07 10:35:39.680520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:40.360 [2024-12-07 10:35:39.680530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:40.360 [2024-12-07 10:35:39.680540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.360 [2024-12-07 10:35:39.680631] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:40.360 [2024-12-07 10:35:39.680651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:40.360 [2024-12-07 10:35:39.680662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.360 [2024-12-07 10:35:39.680673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.360 [2024-12-07 10:35:39.680684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:40.360 [2024-12-07 10:35:39.680694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:40.360 [2024-12-07 10:35:39.680704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:40.360 [2024-12-07 10:35:39.680713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:40.360 [2024-12-07 10:35:39.680723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:40.360 [2024-12-07 10:35:39.680732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.360 [2024-12-07 10:35:39.680742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:40.360 [2024-12-07 10:35:39.680764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:40.360 [2024-12-07 10:35:39.680774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.360 [2024-12-07 10:35:39.680783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:40.360 [2024-12-07 10:35:39.680792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:40.360 [2024-12-07 10:35:39.680802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.360 [2024-12-07 10:35:39.680811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:23:40.360 [2024-12-07 10:35:39.680821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:40.360 [2024-12-07 10:35:39.680830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.360 [2024-12-07 10:35:39.680839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:40.360 [2024-12-07 10:35:39.680849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:40.360 [2024-12-07 10:35:39.680858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.360 [2024-12-07 10:35:39.680867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:40.360 [2024-12-07 10:35:39.680876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:40.360 [2024-12-07 10:35:39.680885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.360 [2024-12-07 10:35:39.680894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:40.360 [2024-12-07 10:35:39.680904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:40.360 [2024-12-07 10:35:39.680913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.360 [2024-12-07 10:35:39.680922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:40.360 [2024-12-07 10:35:39.680930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:40.360 [2024-12-07 10:35:39.680940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.360 [2024-12-07 10:35:39.680950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:40.360 [2024-12-07 10:35:39.680959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:40.360 [2024-12-07 10:35:39.680967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.360 [2024-12-07 10:35:39.681000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:40.360 [2024-12-07 10:35:39.681011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:40.360 [2024-12-07 10:35:39.681020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.360 [2024-12-07 10:35:39.681030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:40.360 [2024-12-07 10:35:39.681039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:40.360 [2024-12-07 10:35:39.681049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.360 [2024-12-07 10:35:39.681057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:40.360 [2024-12-07 10:35:39.681067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:40.360 [2024-12-07 10:35:39.681076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.360 [2024-12-07 10:35:39.681086] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:40.360 [2024-12-07 10:35:39.681096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:40.360 [2024-12-07 10:35:39.681110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.360 [2024-12-07 10:35:39.681120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.360 [2024-12-07 10:35:39.681130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:40.360 [2024-12-07 10:35:39.681140] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:40.360 [2024-12-07 10:35:39.681150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:40.360 [2024-12-07 10:35:39.681159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:40.360 [2024-12-07 10:35:39.681168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:40.360 [2024-12-07 10:35:39.681178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:40.360 [2024-12-07 10:35:39.681188] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:40.360 [2024-12-07 10:35:39.681201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.360 [2024-12-07 10:35:39.681213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:40.360 [2024-12-07 10:35:39.681224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:40.360 [2024-12-07 10:35:39.681234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:40.360 [2024-12-07 10:35:39.681244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:40.360 [2024-12-07 10:35:39.681254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:40.360 [2024-12-07 10:35:39.681265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:40.360 [2024-12-07 10:35:39.681275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:40.360 [2024-12-07 10:35:39.681285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:40.360 [2024-12-07 10:35:39.681296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:40.360 [2024-12-07 10:35:39.681307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:40.360 [2024-12-07 10:35:39.681317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:40.360 [2024-12-07 10:35:39.681328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:40.360 [2024-12-07 10:35:39.681338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:40.360 [2024-12-07 10:35:39.681349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:40.360 [2024-12-07 10:35:39.681359] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:40.360 [2024-12-07 10:35:39.681371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.360 [2024-12-07 10:35:39.681382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:40.360 [2024-12-07 10:35:39.681392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:40.360 [2024-12-07 10:35:39.681403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:40.360 [2024-12-07 10:35:39.681413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:40.360 [2024-12-07 10:35:39.681424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.360 [2024-12-07 10:35:39.681438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:40.360 [2024-12-07 10:35:39.681449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:23:40.360 [2024-12-07 10:35:39.681460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.620 [2024-12-07 10:35:39.719850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.620 [2024-12-07 10:35:39.719888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.620 [2024-12-07 10:35:39.719901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.385 ms 00:23:40.620 [2024-12-07 10:35:39.719911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.620 [2024-12-07 10:35:39.720064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.620 [2024-12-07 10:35:39.720078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:40.620 [2024-12-07 10:35:39.720089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:40.620 [2024-12-07 10:35:39.720100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.620 [2024-12-07 10:35:39.791119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.620 [2024-12-07 10:35:39.791161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.620 [2024-12-07 10:35:39.791179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.111 ms 00:23:40.620 [2024-12-07 10:35:39.791189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.620 [2024-12-07 10:35:39.791295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.620 [2024-12-07 10:35:39.791308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.620 [2024-12-07 10:35:39.791320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:40.620 [2024-12-07 10:35:39.791330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.620 [2024-12-07 10:35:39.791813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.620 [2024-12-07 10:35:39.791835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.620 [2024-12-07 10:35:39.791853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:23:40.620 [2024-12-07 10:35:39.791864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.620 [2024-12-07 10:35:39.792005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:40.620 [2024-12-07 10:35:39.792023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.620 [2024-12-07 10:35:39.792035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:23:40.620 [2024-12-07 10:35:39.792046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.620 [2024-12-07 10:35:39.810156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.620 [2024-12-07 10:35:39.810191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.620 [2024-12-07 10:35:39.810220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.116 ms 00:23:40.620 [2024-12-07 10:35:39.810230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.620 [2024-12-07 10:35:39.828396] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:40.620 [2024-12-07 10:35:39.828434] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:40.620 [2024-12-07 10:35:39.828448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.620 [2024-12-07 10:35:39.828458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:40.620 [2024-12-07 10:35:39.828486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.141 ms 00:23:40.620 [2024-12-07 10:35:39.828496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.620 [2024-12-07 10:35:39.856524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.620 [2024-12-07 10:35:39.856565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:40.620 [2024-12-07 10:35:39.856578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.992 ms 00:23:40.620 [2024-12-07 10:35:39.856588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.620 [2024-12-07 10:35:39.873534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.620 [2024-12-07 10:35:39.873571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:40.620 [2024-12-07 10:35:39.873584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.876 ms 00:23:40.620 [2024-12-07 10:35:39.873593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.620 [2024-12-07 10:35:39.890727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.620 [2024-12-07 10:35:39.890762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:40.620 [2024-12-07 10:35:39.890774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.071 ms 00:23:40.620 [2024-12-07 10:35:39.890783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.620 [2024-12-07 10:35:39.891516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.620 [2024-12-07 10:35:39.891548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:40.620 [2024-12-07 10:35:39.891560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:23:40.620 [2024-12-07 10:35:39.891570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.878 [2024-12-07 10:35:39.973032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.878 [2024-12-07 
10:35:39.973091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:40.878 [2024-12-07 10:35:39.973109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.544 ms 00:23:40.878 [2024-12-07 10:35:39.973120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.878 [2024-12-07 10:35:39.984149] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:40.878 [2024-12-07 10:35:40.000272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.878 [2024-12-07 10:35:40.000319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:40.878 [2024-12-07 10:35:40.000336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.096 ms 00:23:40.878 [2024-12-07 10:35:40.000352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.878 [2024-12-07 10:35:40.000478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.878 [2024-12-07 10:35:40.000492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:40.878 [2024-12-07 10:35:40.000504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:40.878 [2024-12-07 10:35:40.000514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.878 [2024-12-07 10:35:40.000570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.878 [2024-12-07 10:35:40.000581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:40.878 [2024-12-07 10:35:40.000592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:40.878 [2024-12-07 10:35:40.000605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.878 [2024-12-07 10:35:40.000639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.878 [2024-12-07 10:35:40.000653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:40.878 [2024-12-07 10:35:40.000663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:40.878 [2024-12-07 10:35:40.000674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.879 [2024-12-07 10:35:40.000713] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:40.879 [2024-12-07 10:35:40.000726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.879 [2024-12-07 10:35:40.000736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:40.879 [2024-12-07 10:35:40.000746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:40.879 [2024-12-07 10:35:40.000756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.879 [2024-12-07 10:35:40.037103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.879 [2024-12-07 10:35:40.037143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:40.879 [2024-12-07 10:35:40.037158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.383 ms 00:23:40.879 [2024-12-07 10:35:40.037169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.879 [2024-12-07 10:35:40.037297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.879 [2024-12-07 10:35:40.037312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:40.879 [2024-12-07 
10:35:40.037323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:40.879 [2024-12-07 10:35:40.037333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.879 [2024-12-07 10:35:40.038438] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:40.879 [2024-12-07 10:35:40.042574] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.431 ms, result 0 00:23:40.879 [2024-12-07 10:35:40.043555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:40.879 [2024-12-07 10:35:40.061184] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:41.814  [2024-12-07T10:35:42.544Z] Copying: 27/256 [MB] (27 MBps) [2024-12-07T10:35:43.478Z] Copying: 51/256 [MB] (24 MBps) [2024-12-07T10:35:44.414Z] Copying: 75/256 [MB] (23 MBps) [2024-12-07T10:35:45.352Z] Copying: 100/256 [MB] (24 MBps) [2024-12-07T10:35:46.286Z] Copying: 124/256 [MB] (23 MBps) [2024-12-07T10:35:47.218Z] Copying: 149/256 [MB] (24 MBps) [2024-12-07T10:35:48.152Z] Copying: 173/256 [MB] (24 MBps) [2024-12-07T10:35:49.529Z] Copying: 198/256 [MB] (25 MBps) [2024-12-07T10:35:50.466Z] Copying: 223/256 [MB] (24 MBps) [2024-12-07T10:35:50.466Z] Copying: 247/256 [MB] (24 MBps) [2024-12-07T10:35:51.037Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-07 10:35:50.728242] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:51.684 [2024-12-07 10:35:50.760509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.684 [2024-12-07 10:35:50.760576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:51.684 [2024-12-07 10:35:50.760610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:51.684 [2024-12-07 10:35:50.760626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.684 [2024-12-07 10:35:50.760665] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:51.684 [2024-12-07 10:35:50.765322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.684 [2024-12-07 10:35:50.765357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:51.684 [2024-12-07 10:35:50.765372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.639 ms 00:23:51.684 [2024-12-07 10:35:50.765383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.684 [2024-12-07 10:35:50.765644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.684 [2024-12-07 10:35:50.765659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:51.684 [2024-12-07 10:35:50.765670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:23:51.684 [2024-12-07 10:35:50.765682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.684 [2024-12-07 10:35:50.768594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.684 [2024-12-07 10:35:50.768618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:51.684 [2024-12-07 10:35:50.768630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.895 ms 00:23:51.684 [2024-12-07 10:35:50.768642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:51.684 [2024-12-07 10:35:50.774071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.684 [2024-12-07 10:35:50.774110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:51.684 [2024-12-07 10:35:50.774122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.413 ms 00:23:51.684 [2024-12-07 10:35:50.774132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.684 [2024-12-07 10:35:50.808866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.684 [2024-12-07 10:35:50.808911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:51.684 [2024-12-07 10:35:50.808925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.673 ms 00:23:51.684 [2024-12-07 10:35:50.808934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.684 [2024-12-07 10:35:50.829826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.684 [2024-12-07 10:35:50.829867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:51.684 [2024-12-07 10:35:50.829888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.832 ms 00:23:51.684 [2024-12-07 10:35:50.829897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.684 [2024-12-07 10:35:50.830091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.684 [2024-12-07 10:35:50.830107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:51.684 [2024-12-07 10:35:50.830130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:51.684 [2024-12-07 10:35:50.830140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.684 [2024-12-07 10:35:50.864549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.684 [2024-12-07 10:35:50.864586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:51.684 [2024-12-07 10:35:50.864598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.448 ms 00:23:51.684 [2024-12-07 10:35:50.864608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.684 [2024-12-07 10:35:50.898819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.684 [2024-12-07 10:35:50.898854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:51.684 [2024-12-07 10:35:50.898867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.195 ms 00:23:51.684 [2024-12-07 10:35:50.898876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.684 [2024-12-07 10:35:50.932613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.684 [2024-12-07 10:35:50.932647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:51.684 [2024-12-07 10:35:50.932674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.722 ms 00:23:51.684 [2024-12-07 10:35:50.932684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.684 [2024-12-07 10:35:50.966194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.684 [2024-12-07 10:35:50.966229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:51.684 [2024-12-07 10:35:50.966241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.470 ms 00:23:51.684 
[2024-12-07 10:35:50.966250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.684 [2024-12-07 10:35:50.966319] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:51.684 [2024-12-07 10:35:50.966335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966581] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:51.684 [2024-12-07 10:35:50.966786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 
10:35:50.966858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.966982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:23:51.685 [2024-12-07 10:35:50.967133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:51.685 [2024-12-07 10:35:50.967436] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:51.685 [2024-12-07 10:35:50.967446] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 06a87537-5a92-450d-8735-ed5d8c4b9fb5 00:23:51.685 [2024-12-07 10:35:50.967457] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:51.685 [2024-12-07 10:35:50.967468] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:51.685 [2024-12-07 10:35:50.967478] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:51.685 [2024-12-07 10:35:50.967489] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:51.685 [2024-12-07 10:35:50.967499] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:51.685 [2024-12-07 10:35:50.967509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:51.685 [2024-12-07 10:35:50.967523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:51.685 [2024-12-07 10:35:50.967532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:51.685 [2024-12-07 10:35:50.967542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:51.685 [2024-12-07 10:35:50.967551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.685 [2024-12-07 10:35:50.967562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:51.685 [2024-12-07 10:35:50.967573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:23:51.685 [2024-12-07 10:35:50.967583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.685 [2024-12-07 10:35:50.986845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.685 [2024-12-07 10:35:50.986884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:51.685 [2024-12-07 10:35:50.986896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.272 ms 00:23:51.685 [2024-12-07 10:35:50.986906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.685 [2024-12-07 10:35:50.987553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.685 [2024-12-07 10:35:50.987574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:51.685 [2024-12-07 10:35:50.987586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:23:51.685 [2024-12-07 10:35:50.987597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.946 [2024-12-07 10:35:51.042576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.946 [2024-12-07 10:35:51.042614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:51.946 [2024-12-07 10:35:51.042628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.946 [2024-12-07 10:35:51.042644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.946 [2024-12-07 10:35:51.042742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.946 [2024-12-07 10:35:51.042754] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:51.946 [2024-12-07 10:35:51.042766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.946 [2024-12-07 10:35:51.042776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.946 [2024-12-07 10:35:51.042830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.946 [2024-12-07 10:35:51.042843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:51.946 [2024-12-07 10:35:51.042854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.946 [2024-12-07 10:35:51.042864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.946 [2024-12-07 10:35:51.042888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.946 [2024-12-07 10:35:51.042898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:51.946 [2024-12-07 10:35:51.042908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.946 [2024-12-07 10:35:51.042917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.946 [2024-12-07 10:35:51.161953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.946 [2024-12-07 10:35:51.162012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:51.946 [2024-12-07 10:35:51.162026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.946 [2024-12-07 10:35:51.162036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.946 [2024-12-07 10:35:51.256375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.946 [2024-12-07 10:35:51.256426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:51.946 [2024-12-07 10:35:51.256440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.946 [2024-12-07 10:35:51.256450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.946 [2024-12-07 10:35:51.256532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.946 [2024-12-07 10:35:51.256543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:51.946 [2024-12-07 10:35:51.256553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.946 [2024-12-07 10:35:51.256563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.946 [2024-12-07 10:35:51.256593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.946 [2024-12-07 10:35:51.256609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:51.946 [2024-12-07 10:35:51.256619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.946 [2024-12-07 10:35:51.256629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.946 [2024-12-07 10:35:51.256732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.946 [2024-12-07 10:35:51.256745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:51.946 [2024-12-07 10:35:51.256755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.946 [2024-12-07 10:35:51.256765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.947 [2024-12-07 10:35:51.256816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:23:51.947 [2024-12-07 10:35:51.256828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:51.947 [2024-12-07 10:35:51.256843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.947 [2024-12-07 10:35:51.256854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.947 [2024-12-07 10:35:51.256895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.947 [2024-12-07 10:35:51.256907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:51.947 [2024-12-07 10:35:51.256917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.947 [2024-12-07 10:35:51.256927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.947 [2024-12-07 10:35:51.256968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.947 [2024-12-07 10:35:51.256984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:51.947 [2024-12-07 10:35:51.256994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.947 [2024-12-07 10:35:51.257004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.947 [2024-12-07 10:35:51.257171] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 497.487 ms, result 0 00:23:53.325 00:23:53.325 00:23:53.325 10:35:52 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:53.585 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:23:53.585 10:35:52 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:23:53.585 10:35:52 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:23:53.585 10:35:52 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:53.585 10:35:52 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:53.585 10:35:52 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:23:53.585 10:35:52 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:53.585 10:35:52 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78781 00:23:53.585 10:35:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78781 ']' 00:23:53.585 10:35:52 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78781 00:23:53.585 Process with pid 78781 is not found 00:23:53.585 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78781) - No such process 00:23:53.585 10:35:52 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78781 is not found' 00:23:53.585 00:23:53.585 real 1m11.338s 00:23:53.585 user 1m38.168s 00:23:53.585 sys 0m6.676s 00:23:53.585 10:35:52 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.585 10:35:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:53.585 ************************************ 00:23:53.585 END TEST ftl_trim 00:23:53.585 ************************************ 00:23:53.585 10:35:52 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:53.585 10:35:52 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:53.585 10:35:52 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.585 10:35:52 ftl -- common/autotest_common.sh@10 
-- # set +x 00:23:53.585 ************************************ 00:23:53.585 START TEST ftl_restore 00:23:53.585 ************************************ 00:23:53.585 10:35:52 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:53.845 * Looking for test storage... 00:23:53.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.845 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:53.845 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:23:53.845 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:53.845 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.845 10:35:53 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:23:53.845 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.845 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.845 --rc genhtml_branch_coverage=1 00:23:53.845 --rc genhtml_function_coverage=1 00:23:53.845 --rc genhtml_legend=1 00:23:53.845 --rc geninfo_all_blocks=1 00:23:53.845 --rc geninfo_unexecuted_blocks=1 00:23:53.845 00:23:53.845 ' 00:23:53.845 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.845 --rc genhtml_branch_coverage=1 00:23:53.845 --rc genhtml_function_coverage=1 00:23:53.845 --rc genhtml_legend=1 00:23:53.845 --rc geninfo_all_blocks=1 00:23:53.845 --rc geninfo_unexecuted_blocks=1 00:23:53.845 00:23:53.846 ' 00:23:53.846 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:53.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.846 --rc genhtml_branch_coverage=1 00:23:53.846 --rc genhtml_function_coverage=1 00:23:53.846 --rc genhtml_legend=1 00:23:53.846 --rc geninfo_all_blocks=1 00:23:53.846 --rc geninfo_unexecuted_blocks=1 00:23:53.846 00:23:53.846 ' 00:23:53.846 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:53.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.846 --rc genhtml_branch_coverage=1 00:23:53.846 --rc genhtml_function_coverage=1 00:23:53.846 --rc genhtml_legend=1 00:23:53.846 --rc geninfo_all_blocks=1 00:23:53.846 --rc geninfo_unexecuted_blocks=1 00:23:53.846 00:23:53.846 ' 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
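The lt/cmp_versions trace above is autotest_common.sh deciding which lcov coverage flags to use: the two dotted version strings are split on '.' and compared component by component, and the old-style --rc lcov_*_coverage=1 options are kept only when the installed lcov sorts below 2. A rough bash sketch of that comparison follows; version_lt is a hypothetical name used for illustration, not the actual scripts/common.sh implementation, and the exact branching in autotest_common.sh may differ.

# Hypothetical paraphrase of the cmp_versions/lt logic traced above.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)          # split "1.15" -> (1 15), "2" -> (2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}   # a missing component compares as 0
        (( x < y )) && return 0     # strictly smaller here: "less than"
        (( x > y )) && return 1     # strictly larger here: not "less than"
    done
    return 1                        # equal versions are not "less than"
}

if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi

With 1.15 the loop stops at the first component (1 < 2), which is why the LCOV_OPTS/LCOV exports above carry the lcov_branch_coverage and lcov_function_coverage options.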
00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.T8y7EZKWr7 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:23:53.846 10:35:53 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:54.106 
10:35:53 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79054 00:23:54.106 10:35:53 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:54.106 10:35:53 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79054 00:23:54.106 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79054 ']' 00:23:54.106 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.106 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.106 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.106 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.106 10:35:53 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:54.106 [2024-12-07 10:35:53.310849] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:54.106 [2024-12-07 10:35:53.310990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79054 ] 00:23:54.366 [2024-12-07 10:35:53.493728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.366 [2024-12-07 10:35:53.600499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.303 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.303 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:23:55.303 10:35:54 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:55.303 10:35:54 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:23:55.303 10:35:54 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:55.303 10:35:54 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:23:55.303 10:35:54 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:23:55.303 10:35:54 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:55.562 10:35:54 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:55.562 10:35:54 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:23:55.562 10:35:54 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:55.562 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:55.562 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:55.562 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:55.562 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:55.562 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:55.821 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:55.821 { 00:23:55.821 "name": "nvme0n1", 00:23:55.821 "aliases": [ 00:23:55.821 "406ff387-ecf3-40ef-bd80-e01100806d5e" 00:23:55.821 ], 00:23:55.821 "product_name": "NVMe disk", 00:23:55.821 "block_size": 4096, 00:23:55.821 "num_blocks": 1310720, 00:23:55.821 "uuid": 
"406ff387-ecf3-40ef-bd80-e01100806d5e", 00:23:55.821 "numa_id": -1, 00:23:55.821 "assigned_rate_limits": { 00:23:55.821 "rw_ios_per_sec": 0, 00:23:55.821 "rw_mbytes_per_sec": 0, 00:23:55.821 "r_mbytes_per_sec": 0, 00:23:55.821 "w_mbytes_per_sec": 0 00:23:55.821 }, 00:23:55.821 "claimed": true, 00:23:55.821 "claim_type": "read_many_write_one", 00:23:55.821 "zoned": false, 00:23:55.821 "supported_io_types": { 00:23:55.821 "read": true, 00:23:55.821 "write": true, 00:23:55.821 "unmap": true, 00:23:55.821 "flush": true, 00:23:55.821 "reset": true, 00:23:55.821 "nvme_admin": true, 00:23:55.821 "nvme_io": true, 00:23:55.821 "nvme_io_md": false, 00:23:55.821 "write_zeroes": true, 00:23:55.821 "zcopy": false, 00:23:55.821 "get_zone_info": false, 00:23:55.821 "zone_management": false, 00:23:55.821 "zone_append": false, 00:23:55.821 "compare": true, 00:23:55.821 "compare_and_write": false, 00:23:55.821 "abort": true, 00:23:55.821 "seek_hole": false, 00:23:55.821 "seek_data": false, 00:23:55.821 "copy": true, 00:23:55.821 "nvme_iov_md": false 00:23:55.821 }, 00:23:55.821 "driver_specific": { 00:23:55.821 "nvme": [ 00:23:55.821 { 00:23:55.821 "pci_address": "0000:00:11.0", 00:23:55.821 "trid": { 00:23:55.821 "trtype": "PCIe", 00:23:55.821 "traddr": "0000:00:11.0" 00:23:55.821 }, 00:23:55.821 "ctrlr_data": { 00:23:55.821 "cntlid": 0, 00:23:55.821 "vendor_id": "0x1b36", 00:23:55.822 "model_number": "QEMU NVMe Ctrl", 00:23:55.822 "serial_number": "12341", 00:23:55.822 "firmware_revision": "8.0.0", 00:23:55.822 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:55.822 "oacs": { 00:23:55.822 "security": 0, 00:23:55.822 "format": 1, 00:23:55.822 "firmware": 0, 00:23:55.822 "ns_manage": 1 00:23:55.822 }, 00:23:55.822 "multi_ctrlr": false, 00:23:55.822 "ana_reporting": false 00:23:55.822 }, 00:23:55.822 "vs": { 00:23:55.822 "nvme_version": "1.4" 00:23:55.822 }, 00:23:55.822 "ns_data": { 00:23:55.822 "id": 1, 00:23:55.822 "can_share": false 00:23:55.822 } 00:23:55.822 } 00:23:55.822 ], 00:23:55.822 "mp_policy": "active_passive" 00:23:55.822 } 00:23:55.822 } 00:23:55.822 ]' 00:23:55.822 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:55.822 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:55.822 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:55.822 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:55.822 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:55.822 10:35:54 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:23:55.822 10:35:54 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:23:55.822 10:35:54 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:55.822 10:35:54 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:23:55.822 10:35:54 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:55.822 10:35:54 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:56.080 10:35:55 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=a7765099-84bd-474a-b32b-dcf03486312e 00:23:56.080 10:35:55 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:23:56.080 10:35:55 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a7765099-84bd-474a-b32b-dcf03486312e 00:23:56.080 10:35:55 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:23:56.338 10:35:55 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=b0a71376-2ebc-4dd2-a8b0-a0d7be59b518 00:23:56.338 10:35:55 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b0a71376-2ebc-4dd2-a8b0-a0d7be59b518 00:23:56.597 10:35:55 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea 00:23:56.597 10:35:55 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:56.597 10:35:55 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea 00:23:56.597 10:35:55 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:23:56.597 10:35:55 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:56.597 10:35:55 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea 00:23:56.597 10:35:55 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:23:56.597 10:35:55 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea 00:23:56.597 10:35:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea 00:23:56.597 10:35:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:56.597 10:35:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:56.597 10:35:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:56.597 10:35:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea 00:23:56.856 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:56.856 { 00:23:56.856 "name": "2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea", 00:23:56.856 "aliases": [ 00:23:56.856 "lvs/nvme0n1p0" 00:23:56.856 ], 00:23:56.856 "product_name": "Logical Volume", 00:23:56.856 "block_size": 4096, 00:23:56.856 "num_blocks": 26476544, 00:23:56.856 "uuid": "2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea", 00:23:56.856 "assigned_rate_limits": { 00:23:56.856 "rw_ios_per_sec": 0, 00:23:56.856 "rw_mbytes_per_sec": 0, 00:23:56.856 "r_mbytes_per_sec": 0, 00:23:56.856 "w_mbytes_per_sec": 0 00:23:56.856 }, 00:23:56.856 "claimed": false, 00:23:56.856 "zoned": false, 00:23:56.856 "supported_io_types": { 00:23:56.856 "read": true, 00:23:56.856 "write": true, 00:23:56.856 "unmap": true, 00:23:56.856 "flush": false, 00:23:56.856 "reset": true, 00:23:56.856 "nvme_admin": false, 00:23:56.856 "nvme_io": false, 00:23:56.856 "nvme_io_md": false, 00:23:56.856 "write_zeroes": true, 00:23:56.856 "zcopy": false, 00:23:56.856 "get_zone_info": false, 00:23:56.856 "zone_management": false, 00:23:56.856 "zone_append": false, 00:23:56.856 "compare": false, 00:23:56.856 "compare_and_write": false, 00:23:56.856 "abort": false, 00:23:56.856 "seek_hole": true, 00:23:56.856 "seek_data": true, 00:23:56.856 "copy": false, 00:23:56.856 "nvme_iov_md": false 00:23:56.856 }, 00:23:56.856 "driver_specific": { 00:23:56.856 "lvol": { 00:23:56.856 "lvol_store_uuid": "b0a71376-2ebc-4dd2-a8b0-a0d7be59b518", 00:23:56.856 "base_bdev": "nvme0n1", 00:23:56.856 "thin_provision": true, 00:23:56.856 "num_allocated_clusters": 0, 00:23:56.856 "snapshot": false, 00:23:56.856 "clone": false, 00:23:56.856 "esnap_clone": false 00:23:56.856 } 00:23:56.856 } 00:23:56.856 } 00:23:56.856 ]' 00:23:56.856 10:35:56 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:56.856 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:56.856 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:56.856 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:56.856 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:56.856 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:56.856 10:35:56 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:23:56.856 10:35:56 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:23:56.856 10:35:56 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:57.115 10:35:56 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:57.115 10:35:56 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:57.115 10:35:56 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea 00:23:57.115 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea 00:23:57.115 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:57.115 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:57.115 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:57.115 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea 00:23:57.471 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:57.471 { 00:23:57.471 "name": "2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea", 00:23:57.471 "aliases": [ 00:23:57.471 "lvs/nvme0n1p0" 00:23:57.471 ], 00:23:57.471 "product_name": "Logical Volume", 00:23:57.471 "block_size": 4096, 00:23:57.471 "num_blocks": 26476544, 00:23:57.471 "uuid": "2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea", 00:23:57.471 "assigned_rate_limits": { 00:23:57.471 "rw_ios_per_sec": 0, 00:23:57.471 "rw_mbytes_per_sec": 0, 00:23:57.471 "r_mbytes_per_sec": 0, 00:23:57.471 "w_mbytes_per_sec": 0 00:23:57.471 }, 00:23:57.471 "claimed": false, 00:23:57.471 "zoned": false, 00:23:57.471 "supported_io_types": { 00:23:57.471 "read": true, 00:23:57.471 "write": true, 00:23:57.471 "unmap": true, 00:23:57.471 "flush": false, 00:23:57.471 "reset": true, 00:23:57.471 "nvme_admin": false, 00:23:57.471 "nvme_io": false, 00:23:57.471 "nvme_io_md": false, 00:23:57.471 "write_zeroes": true, 00:23:57.471 "zcopy": false, 00:23:57.471 "get_zone_info": false, 00:23:57.471 "zone_management": false, 00:23:57.471 "zone_append": false, 00:23:57.471 "compare": false, 00:23:57.471 "compare_and_write": false, 00:23:57.471 "abort": false, 00:23:57.471 "seek_hole": true, 00:23:57.471 "seek_data": true, 00:23:57.471 "copy": false, 00:23:57.471 "nvme_iov_md": false 00:23:57.472 }, 00:23:57.472 "driver_specific": { 00:23:57.472 "lvol": { 00:23:57.472 "lvol_store_uuid": "b0a71376-2ebc-4dd2-a8b0-a0d7be59b518", 00:23:57.472 "base_bdev": "nvme0n1", 00:23:57.472 "thin_provision": true, 00:23:57.472 "num_allocated_clusters": 0, 00:23:57.472 "snapshot": false, 00:23:57.472 "clone": false, 00:23:57.472 "esnap_clone": false 00:23:57.472 } 00:23:57.472 } 00:23:57.472 } 00:23:57.472 ]' 00:23:57.472 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
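The get_bdev_size steps traced here read the bdev_get_bdevs JSON shown above with jq and convert block_size x num_blocks into a size in MiB, which the ftl/common.sh helpers then use for the base- and cache-size checks that follow. A minimal sketch of that flow (get_bdev_size_mib is a hypothetical name; the real helper lives in test/common/autotest_common.sh and may differ in detail):

# Sketch only: derive a bdev's size in MiB the way the trace above does.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
get_bdev_size_mib() {
    local bdev=$1 info bs nb
    info=$($rpc_py bdev_get_bdevs -b "$bdev")   # JSON array with one object
    bs=$(jq '.[] .block_size' <<< "$info")      # e.g. 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")      # e.g. 26476544 for the lvol
    echo $(( nb * bs / 1024 / 1024 ))           # bytes -> MiB
}

Plugging in the values from the dumps above: 1310720 blocks x 4096 B = 5120 MiB for nvme0n1, and 26476544 blocks x 4096 B = 103424 MiB for the thin-provisioned lvol, matching the bdev_size values echoed in the trace.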
00:23:57.472 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:57.472 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:57.472 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:57.472 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:57.472 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:57.472 10:35:56 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:23:57.472 10:35:56 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:57.732 10:35:56 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:57.732 10:35:56 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea 00:23:57.732 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea 00:23:57.732 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:57.732 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:57.732 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:57.732 10:35:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea 00:23:57.732 10:35:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:57.732 { 00:23:57.732 "name": "2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea", 00:23:57.732 "aliases": [ 00:23:57.732 "lvs/nvme0n1p0" 00:23:57.732 ], 00:23:57.732 "product_name": "Logical Volume", 00:23:57.732 "block_size": 4096, 00:23:57.732 "num_blocks": 26476544, 00:23:57.732 "uuid": "2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea", 00:23:57.732 "assigned_rate_limits": { 00:23:57.732 "rw_ios_per_sec": 0, 00:23:57.732 "rw_mbytes_per_sec": 0, 00:23:57.732 "r_mbytes_per_sec": 0, 00:23:57.732 "w_mbytes_per_sec": 0 00:23:57.732 }, 00:23:57.732 "claimed": false, 00:23:57.732 "zoned": false, 00:23:57.732 "supported_io_types": { 00:23:57.732 "read": true, 00:23:57.732 "write": true, 00:23:57.732 "unmap": true, 00:23:57.732 "flush": false, 00:23:57.732 "reset": true, 00:23:57.732 "nvme_admin": false, 00:23:57.732 "nvme_io": false, 00:23:57.732 "nvme_io_md": false, 00:23:57.732 "write_zeroes": true, 00:23:57.732 "zcopy": false, 00:23:57.732 "get_zone_info": false, 00:23:57.732 "zone_management": false, 00:23:57.732 "zone_append": false, 00:23:57.732 "compare": false, 00:23:57.732 "compare_and_write": false, 00:23:57.732 "abort": false, 00:23:57.732 "seek_hole": true, 00:23:57.732 "seek_data": true, 00:23:57.732 "copy": false, 00:23:57.732 "nvme_iov_md": false 00:23:57.732 }, 00:23:57.732 "driver_specific": { 00:23:57.732 "lvol": { 00:23:57.732 "lvol_store_uuid": "b0a71376-2ebc-4dd2-a8b0-a0d7be59b518", 00:23:57.732 "base_bdev": "nvme0n1", 00:23:57.732 "thin_provision": true, 00:23:57.732 "num_allocated_clusters": 0, 00:23:57.732 "snapshot": false, 00:23:57.732 "clone": false, 00:23:57.732 "esnap_clone": false 00:23:57.732 } 00:23:57.732 } 00:23:57.732 } 00:23:57.732 ]' 00:23:57.732 10:35:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:57.990 10:35:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:57.990 10:35:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:57.990 10:35:57 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:23:57.990 10:35:57 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:57.990 10:35:57 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:57.990 10:35:57 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:57.990 10:35:57 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea --l2p_dram_limit 10' 00:23:57.990 10:35:57 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:57.990 10:35:57 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:57.990 10:35:57 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:57.990 10:35:57 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:23:57.990 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:23:57.990 10:35:57 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2d50f2d1-0e71-4ea0-80d2-af98e2ebb3ea --l2p_dram_limit 10 -c nvc0n1p0 00:23:58.249 [2024-12-07 10:35:57.348055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.249 [2024-12-07 10:35:57.348104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:58.249 [2024-12-07 10:35:57.348123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:58.249 [2024-12-07 10:35:57.348135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.249 [2024-12-07 10:35:57.348209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.249 [2024-12-07 10:35:57.348222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:58.249 [2024-12-07 10:35:57.348236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:58.249 [2024-12-07 10:35:57.348246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.249 [2024-12-07 10:35:57.348276] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:58.249 [2024-12-07 10:35:57.349272] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:58.249 [2024-12-07 10:35:57.349303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.249 [2024-12-07 10:35:57.349315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:58.249 [2024-12-07 10:35:57.349328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:23:58.249 [2024-12-07 10:35:57.349338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.249 [2024-12-07 10:35:57.349417] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 812d4f29-38c0-44f5-af4a-828d2ebd97c9 00:23:58.250 [2024-12-07 10:35:57.350886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.250 [2024-12-07 10:35:57.351085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:58.250 [2024-12-07 10:35:57.351107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:58.250 [2024-12-07 10:35:57.351121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.250 [2024-12-07 10:35:57.358908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.250 [2024-12-07 
10:35:57.359078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:58.250 [2024-12-07 10:35:57.359099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.747 ms 00:23:58.250 [2024-12-07 10:35:57.359113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.250 [2024-12-07 10:35:57.359243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.250 [2024-12-07 10:35:57.359260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:58.250 [2024-12-07 10:35:57.359272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:58.250 [2024-12-07 10:35:57.359289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.250 [2024-12-07 10:35:57.359380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.250 [2024-12-07 10:35:57.359398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:58.250 [2024-12-07 10:35:57.359412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:58.250 [2024-12-07 10:35:57.359424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.250 [2024-12-07 10:35:57.359449] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:58.250 [2024-12-07 10:35:57.364624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.250 [2024-12-07 10:35:57.364654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:58.250 [2024-12-07 10:35:57.364670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.187 ms 00:23:58.250 [2024-12-07 10:35:57.364680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.250 [2024-12-07 10:35:57.364716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.250 [2024-12-07 10:35:57.364727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:58.250 [2024-12-07 10:35:57.364739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:58.250 [2024-12-07 10:35:57.364749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.250 [2024-12-07 10:35:57.364784] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:58.250 [2024-12-07 10:35:57.364908] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:58.250 [2024-12-07 10:35:57.364927] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:58.250 [2024-12-07 10:35:57.364940] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:58.250 [2024-12-07 10:35:57.364955] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:58.250 [2024-12-07 10:35:57.364966] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:58.250 [2024-12-07 10:35:57.365001] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:58.250 [2024-12-07 10:35:57.365027] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:58.250 [2024-12-07 10:35:57.365044] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:58.250 [2024-12-07 10:35:57.365054] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:58.250 [2024-12-07 10:35:57.365067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.250 [2024-12-07 10:35:57.365086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:58.250 [2024-12-07 10:35:57.365100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:23:58.250 [2024-12-07 10:35:57.365110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.250 [2024-12-07 10:35:57.365187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.250 [2024-12-07 10:35:57.365198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:58.250 [2024-12-07 10:35:57.365211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:58.250 [2024-12-07 10:35:57.365220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.250 [2024-12-07 10:35:57.365315] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:58.250 [2024-12-07 10:35:57.365329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:58.250 [2024-12-07 10:35:57.365343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:58.250 [2024-12-07 10:35:57.365353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.250 [2024-12-07 10:35:57.365366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:58.250 [2024-12-07 10:35:57.365375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:58.250 [2024-12-07 10:35:57.365386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:58.250 [2024-12-07 10:35:57.365396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:58.250 [2024-12-07 10:35:57.365408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:58.250 [2024-12-07 10:35:57.365417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:58.250 [2024-12-07 10:35:57.365430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:58.250 [2024-12-07 10:35:57.365440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:58.250 [2024-12-07 10:35:57.365452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:58.250 [2024-12-07 10:35:57.365461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:58.250 [2024-12-07 10:35:57.365473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:58.250 [2024-12-07 10:35:57.365483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.250 [2024-12-07 10:35:57.365497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:58.250 [2024-12-07 10:35:57.365506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:58.250 [2024-12-07 10:35:57.365517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.250 [2024-12-07 10:35:57.365527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:58.250 [2024-12-07 10:35:57.365538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:58.250 [2024-12-07 10:35:57.365547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.250 [2024-12-07 10:35:57.365559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:58.250 
[2024-12-07 10:35:57.365568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:58.250 [2024-12-07 10:35:57.365580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.250 [2024-12-07 10:35:57.365589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:58.250 [2024-12-07 10:35:57.365600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:58.250 [2024-12-07 10:35:57.365609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.250 [2024-12-07 10:35:57.365620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:58.250 [2024-12-07 10:35:57.365630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:58.250 [2024-12-07 10:35:57.365641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.250 [2024-12-07 10:35:57.365650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:58.250 [2024-12-07 10:35:57.365663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:58.250 [2024-12-07 10:35:57.365672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:58.250 [2024-12-07 10:35:57.365683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:58.250 [2024-12-07 10:35:57.365692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:58.250 [2024-12-07 10:35:57.365705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:58.251 [2024-12-07 10:35:57.365714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:58.251 [2024-12-07 10:35:57.365725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:58.251 [2024-12-07 10:35:57.365734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.251 [2024-12-07 10:35:57.365745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:58.251 [2024-12-07 10:35:57.365755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:58.251 [2024-12-07 10:35:57.365765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.251 [2024-12-07 10:35:57.365774] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:58.251 [2024-12-07 10:35:57.365786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:58.251 [2024-12-07 10:35:57.365796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:58.251 [2024-12-07 10:35:57.365808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.251 [2024-12-07 10:35:57.365819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:58.251 [2024-12-07 10:35:57.365833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:58.251 [2024-12-07 10:35:57.365842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:58.251 [2024-12-07 10:35:57.365854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:58.251 [2024-12-07 10:35:57.365863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:58.251 [2024-12-07 10:35:57.365875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:58.251 [2024-12-07 10:35:57.365886] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:58.251 [2024-12-07 
10:35:57.365904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:58.251 [2024-12-07 10:35:57.365916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:58.251 [2024-12-07 10:35:57.365928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:58.251 [2024-12-07 10:35:57.365938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:58.251 [2024-12-07 10:35:57.365951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:58.251 [2024-12-07 10:35:57.365960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:58.251 [2024-12-07 10:35:57.365972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:58.251 [2024-12-07 10:35:57.365983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:58.251 [2024-12-07 10:35:57.366007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:58.251 [2024-12-07 10:35:57.366020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:58.251 [2024-12-07 10:35:57.366036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:58.251 [2024-12-07 10:35:57.366046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:58.251 [2024-12-07 10:35:57.366059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:58.251 [2024-12-07 10:35:57.366070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:58.251 [2024-12-07 10:35:57.366083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:58.251 [2024-12-07 10:35:57.366092] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:58.251 [2024-12-07 10:35:57.366106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:58.251 [2024-12-07 10:35:57.366117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:58.251 [2024-12-07 10:35:57.366131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:58.251 [2024-12-07 10:35:57.366140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:58.251 [2024-12-07 10:35:57.366154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:58.251 [2024-12-07 10:35:57.366164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.251 [2024-12-07 10:35:57.366187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:58.251 [2024-12-07 10:35:57.366197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:23:58.251 [2024-12-07 10:35:57.366210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.251 [2024-12-07 10:35:57.366248] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:58.251 [2024-12-07 10:35:57.366264] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:02.444 [2024-12-07 10:36:00.910244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:00.910498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:02.444 [2024-12-07 10:36:00.910595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3549.748 ms 00:24:02.444 [2024-12-07 10:36:00.910654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:00.944692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:00.944914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.444 [2024-12-07 10:36:00.945053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.670 ms 00:24:02.444 [2024-12-07 10:36:00.945098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:00.945249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:00.945396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:02.444 [2024-12-07 10:36:00.945490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:02.444 [2024-12-07 10:36:00.945531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:00.990635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:00.990828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.444 [2024-12-07 10:36:00.990958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.108 ms 00:24:02.444 [2024-12-07 10:36:00.991019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:00.991080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:00.991124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.444 [2024-12-07 10:36:00.991156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:02.444 [2024-12-07 10:36:00.991267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:00.991809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:00.991961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.444 [2024-12-07 10:36:00.992062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:24:02.444 [2024-12-07 10:36:00.992104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 
[2024-12-07 10:36:00.992232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:00.992405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.444 [2024-12-07 10:36:00.992448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:02.444 [2024-12-07 10:36:00.992485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:01.012612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:01.012756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.444 [2024-12-07 10:36:01.012877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.112 ms 00:24:02.444 [2024-12-07 10:36:01.012916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:01.037225] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:02.444 [2024-12-07 10:36:01.040555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:01.040680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:02.444 [2024-12-07 10:36:01.040815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.566 ms 00:24:02.444 [2024-12-07 10:36:01.040830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:01.136996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:01.137051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:02.444 [2024-12-07 10:36:01.137070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.284 ms 00:24:02.444 [2024-12-07 10:36:01.137080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:01.137257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:01.137273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:02.444 [2024-12-07 10:36:01.137290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:24:02.444 [2024-12-07 10:36:01.137299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:01.172006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:01.172044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:02.444 [2024-12-07 10:36:01.172060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.711 ms 00:24:02.444 [2024-12-07 10:36:01.172079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:01.206150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:01.206303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:02.444 [2024-12-07 10:36:01.206329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.076 ms 00:24:02.444 [2024-12-07 10:36:01.206339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:01.207141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:01.207160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:02.444 
[2024-12-07 10:36:01.207175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:24:02.444 [2024-12-07 10:36:01.207189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:01.303730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:01.303885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:02.444 [2024-12-07 10:36:01.303929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.637 ms 00:24:02.444 [2024-12-07 10:36:01.303941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:01.339519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:01.339557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:02.444 [2024-12-07 10:36:01.339574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.465 ms 00:24:02.444 [2024-12-07 10:36:01.339583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:01.373926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:01.373960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:02.444 [2024-12-07 10:36:01.373986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.355 ms 00:24:02.444 [2024-12-07 10:36:01.373996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:01.408062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:01.408209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:02.444 [2024-12-07 10:36:01.408250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.059 ms 00:24:02.444 [2024-12-07 10:36:01.408260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:01.408305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:01.408317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:02.444 [2024-12-07 10:36:01.408334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:02.444 [2024-12-07 10:36:01.408344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.444 [2024-12-07 10:36:01.408458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.444 [2024-12-07 10:36:01.408474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:02.444 [2024-12-07 10:36:01.408487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:02.444 [2024-12-07 10:36:01.408497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.445 [2024-12-07 10:36:01.409505] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4067.646 ms, result 0 00:24:02.445 { 00:24:02.445 "name": "ftl0", 00:24:02.445 "uuid": "812d4f29-38c0-44f5-af4a-828d2ebd97c9" 00:24:02.445 } 00:24:02.445 10:36:01 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:24:02.445 10:36:01 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:02.445 10:36:01 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:24:02.445 10:36:01 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:02.706 [2024-12-07 10:36:01.812276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.706 [2024-12-07 10:36:01.812331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:02.706 [2024-12-07 10:36:01.812347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:02.706 [2024-12-07 10:36:01.812360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.706 [2024-12-07 10:36:01.812384] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:02.706 [2024-12-07 10:36:01.816367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.706 [2024-12-07 10:36:01.816399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:02.706 [2024-12-07 10:36:01.816414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.968 ms 00:24:02.706 [2024-12-07 10:36:01.816425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.706 [2024-12-07 10:36:01.816656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.706 [2024-12-07 10:36:01.816672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:02.706 [2024-12-07 10:36:01.816685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:24:02.706 [2024-12-07 10:36:01.816694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.706 [2024-12-07 10:36:01.819057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.706 [2024-12-07 10:36:01.819071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:02.706 [2024-12-07 10:36:01.819085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.348 ms 00:24:02.706 [2024-12-07 10:36:01.819095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.706 [2024-12-07 10:36:01.824141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.706 [2024-12-07 10:36:01.824292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:02.706 [2024-12-07 10:36:01.824381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.029 ms 00:24:02.706 [2024-12-07 10:36:01.824417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.706 [2024-12-07 10:36:01.859199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.706 [2024-12-07 10:36:01.859338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:02.706 [2024-12-07 10:36:01.859457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.678 ms 00:24:02.706 [2024-12-07 10:36:01.859493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.706 [2024-12-07 10:36:01.881386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.706 [2024-12-07 10:36:01.881524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:02.706 [2024-12-07 10:36:01.881657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.861 ms 00:24:02.706 [2024-12-07 10:36:01.881693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.706 [2024-12-07 10:36:01.881898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.706 [2024-12-07 10:36:01.882130] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:02.706 [2024-12-07 10:36:01.882174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:24:02.706 [2024-12-07 10:36:01.882204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.706 [2024-12-07 10:36:01.916570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.706 [2024-12-07 10:36:01.916721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:02.706 [2024-12-07 10:36:01.916833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.369 ms 00:24:02.706 [2024-12-07 10:36:01.916869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.706 [2024-12-07 10:36:01.951238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.706 [2024-12-07 10:36:01.951387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:02.706 [2024-12-07 10:36:01.951491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.360 ms 00:24:02.706 [2024-12-07 10:36:01.951527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.706 [2024-12-07 10:36:01.984884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.706 [2024-12-07 10:36:01.985042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:02.706 [2024-12-07 10:36:01.985148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.319 ms 00:24:02.706 [2024-12-07 10:36:01.985184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.706 [2024-12-07 10:36:02.018883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.706 [2024-12-07 10:36:02.019039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:02.706 [2024-12-07 10:36:02.019142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.634 ms 00:24:02.706 [2024-12-07 10:36:02.019194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.706 [2024-12-07 10:36:02.019291] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:02.706 [2024-12-07 10:36:02.019336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:02.706 [2024-12-07 10:36:02.019453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:02.706 [2024-12-07 10:36:02.019507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:02.706 [2024-12-07 10:36:02.019557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:02.706 [2024-12-07 10:36:02.019638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:02.706 [2024-12-07 10:36:02.019733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:02.706 [2024-12-07 10:36:02.019882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:02.706 [2024-12-07 10:36:02.019939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:02.706 [2024-12-07 10:36:02.020007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:02.706 [2024-12-07 10:36:02.020112] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:02.706 [2024-12-07 10:36:02.020163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.020213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.020292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.020386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.020468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.020522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.020570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.020668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.020721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.020774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.020822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.020923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.020974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 
[2024-12-07 10:36:02.021707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.021987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:24:02.707 [2024-12-07 10:36:02.022024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:02.707 [2024-12-07 10:36:02.022483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:02.708 [2024-12-07 10:36:02.022493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:02.708 [2024-12-07 10:36:02.022507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:02.708 [2024-12-07 10:36:02.022524] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:02.708 [2024-12-07 10:36:02.022536] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 812d4f29-38c0-44f5-af4a-828d2ebd97c9 00:24:02.708 [2024-12-07 10:36:02.022546] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:02.708 [2024-12-07 10:36:02.022571] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:02.708 [2024-12-07 10:36:02.022583] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:02.708 [2024-12-07 10:36:02.022596] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:02.708 [2024-12-07 10:36:02.022605] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:02.708 [2024-12-07 10:36:02.022618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:02.708 [2024-12-07 10:36:02.022627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:02.708 [2024-12-07 10:36:02.022638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:02.708 [2024-12-07 10:36:02.022647] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:24:02.708 [2024-12-07 10:36:02.022659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.708 [2024-12-07 10:36:02.022669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:02.708 [2024-12-07 10:36:02.022683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms 00:24:02.708 [2024-12-07 10:36:02.022695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.708 [2024-12-07 10:36:02.041156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.708 [2024-12-07 10:36:02.041192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:02.708 [2024-12-07 10:36:02.041207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.420 ms 00:24:02.708 [2024-12-07 10:36:02.041216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.708 [2024-12-07 10:36:02.041662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.708 [2024-12-07 10:36:02.041673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:02.708 [2024-12-07 10:36:02.041689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:24:02.708 [2024-12-07 10:36:02.041698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.967 [2024-12-07 10:36:02.104185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.967 [2024-12-07 10:36:02.104356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.967 [2024-12-07 10:36:02.104382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.967 [2024-12-07 10:36:02.104393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.967 [2024-12-07 10:36:02.104450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.967 [2024-12-07 10:36:02.104461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.967 [2024-12-07 10:36:02.104477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.967 [2024-12-07 10:36:02.104488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.967 [2024-12-07 10:36:02.104572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.967 [2024-12-07 10:36:02.104586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.967 [2024-12-07 10:36:02.104599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.967 [2024-12-07 10:36:02.104609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.967 [2024-12-07 10:36:02.104633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.967 [2024-12-07 10:36:02.104644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.967 [2024-12-07 10:36:02.104656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.967 [2024-12-07 10:36:02.104669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.967 [2024-12-07 10:36:02.223146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.968 [2024-12-07 10:36:02.223214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.968 [2024-12-07 10:36:02.223233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:02.968 [2024-12-07 10:36:02.223243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.968 [2024-12-07 10:36:02.317588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.968 [2024-12-07 10:36:02.317637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.968 [2024-12-07 10:36:02.317655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.968 [2024-12-07 10:36:02.317685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.968 [2024-12-07 10:36:02.317796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.968 [2024-12-07 10:36:02.317809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:02.968 [2024-12-07 10:36:02.317822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.968 [2024-12-07 10:36:02.317833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.968 [2024-12-07 10:36:02.317890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.968 [2024-12-07 10:36:02.317902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:02.968 [2024-12-07 10:36:02.317914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.968 [2024-12-07 10:36:02.317925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.968 [2024-12-07 10:36:02.318072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.968 [2024-12-07 10:36:02.318088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:02.968 [2024-12-07 10:36:02.318101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.968 [2024-12-07 10:36:02.318111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.968 [2024-12-07 10:36:02.318156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.968 [2024-12-07 10:36:02.318169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:02.968 [2024-12-07 10:36:02.318183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.968 [2024-12-07 10:36:02.318193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.968 [2024-12-07 10:36:02.318238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.968 [2024-12-07 10:36:02.318249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:02.968 [2024-12-07 10:36:02.318262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.968 [2024-12-07 10:36:02.318273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.968 [2024-12-07 10:36:02.318321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.968 [2024-12-07 10:36:02.318334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:02.968 [2024-12-07 10:36:02.318347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.968 [2024-12-07 10:36:02.318357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.968 [2024-12-07 10:36:02.318491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 507.005 ms, result 0 00:24:03.227 true 00:24:03.227 10:36:02 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79054 
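At this point restore.sh has already detached the FTL bdev over RPC (bdev_ftl_unload -b ftl0, which drives the 'FTL shutdown' management process traced above) and now stops the SPDK application with the killprocess 79054 helper. The lines below are only a rough sketch of that unload-and-kill sequence, reconstructed from the xtrace output that follows; APP_PID is a placeholder for the pid printed in this run (79054), and the real helper lives in autotest_common.sh and may differ in detail.

    #!/usr/bin/env bash
    # Rough reconstruction of the unload + killprocess sequence seen in this log.
    # Not the actual autotest_common.sh helper; pid and paths are the ones from this run.
    APP_PID=79054   # pid of the SPDK test app started earlier by restore.sh

    # Detach the FTL bdev first so it runs its 'FTL shutdown' steps and persists metadata.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0

    # Then stop the application, mirroring the 'kill -0', 'kill' and 'wait' calls traced below.
    if kill -0 "$APP_PID" 2>/dev/null; then
        echo "killing process with pid $APP_PID"
        kill "$APP_PID"
        # 'wait' only reaps children of the current shell, which holds in restore.sh
        # because it launched the app itself.
        wait "$APP_PID" 2>/dev/null || true
    fi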
00:24:03.227 10:36:02 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79054 ']' 00:24:03.227 10:36:02 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79054 00:24:03.227 10:36:02 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:24:03.227 10:36:02 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.227 10:36:02 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79054 00:24:03.227 killing process with pid 79054 00:24:03.227 10:36:02 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:03.227 10:36:02 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:03.227 10:36:02 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79054' 00:24:03.227 10:36:02 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79054 00:24:03.227 10:36:02 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79054 00:24:08.497 10:36:06 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:24:11.788 262144+0 records in 00:24:11.788 262144+0 records out 00:24:11.788 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.00397 s, 268 MB/s 00:24:11.788 10:36:10 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:13.696 10:36:12 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:13.696 [2024-12-07 10:36:12.678860] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:24:13.696 [2024-12-07 10:36:12.679017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79290 ] 00:24:13.696 [2024-12-07 10:36:12.865375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.696 [2024-12-07 10:36:12.978047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.264 [2024-12-07 10:36:13.329572] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:14.264 [2024-12-07 10:36:13.329641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:14.264 [2024-12-07 10:36:13.492379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.264 [2024-12-07 10:36:13.492636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:14.264 [2024-12-07 10:36:13.492661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:14.264 [2024-12-07 10:36:13.492673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.264 [2024-12-07 10:36:13.492735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.264 [2024-12-07 10:36:13.492751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:14.264 [2024-12-07 10:36:13.492762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:14.264 [2024-12-07 10:36:13.492773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.264 [2024-12-07 10:36:13.492796] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:24:14.265 [2024-12-07 10:36:13.493934] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:14.265 [2024-12-07 10:36:13.493960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.265 [2024-12-07 10:36:13.493971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:14.265 [2024-12-07 10:36:13.494010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.170 ms 00:24:14.265 [2024-12-07 10:36:13.494020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.265 [2024-12-07 10:36:13.495482] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:14.265 [2024-12-07 10:36:13.513406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.265 [2024-12-07 10:36:13.513543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:14.265 [2024-12-07 10:36:13.513563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.954 ms 00:24:14.265 [2024-12-07 10:36:13.513589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.265 [2024-12-07 10:36:13.513654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.265 [2024-12-07 10:36:13.513666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:14.265 [2024-12-07 10:36:13.513679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:14.265 [2024-12-07 10:36:13.513689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.265 [2024-12-07 10:36:13.520438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.265 [2024-12-07 10:36:13.520570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:14.265 [2024-12-07 10:36:13.520588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.688 ms 00:24:14.265 [2024-12-07 10:36:13.520620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.265 [2024-12-07 10:36:13.520700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.265 [2024-12-07 10:36:13.520713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:14.265 [2024-12-07 10:36:13.520724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:14.265 [2024-12-07 10:36:13.520734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.265 [2024-12-07 10:36:13.520775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.265 [2024-12-07 10:36:13.520787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:14.265 [2024-12-07 10:36:13.520797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:14.265 [2024-12-07 10:36:13.520806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.265 [2024-12-07 10:36:13.520834] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:14.265 [2024-12-07 10:36:13.525603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.265 [2024-12-07 10:36:13.525632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:14.265 [2024-12-07 10:36:13.525647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.782 ms 00:24:14.265 [2024-12-07 10:36:13.525673] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.265 [2024-12-07 10:36:13.525707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.265 [2024-12-07 10:36:13.525718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:14.265 [2024-12-07 10:36:13.525727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:14.265 [2024-12-07 10:36:13.525737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.265 [2024-12-07 10:36:13.525786] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:14.265 [2024-12-07 10:36:13.525811] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:14.265 [2024-12-07 10:36:13.525845] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:14.265 [2024-12-07 10:36:13.525865] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:14.265 [2024-12-07 10:36:13.525953] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:14.265 [2024-12-07 10:36:13.525966] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:14.265 [2024-12-07 10:36:13.525979] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:14.265 [2024-12-07 10:36:13.526010] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:14.265 [2024-12-07 10:36:13.526024] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:14.265 [2024-12-07 10:36:13.526035] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:14.265 [2024-12-07 10:36:13.526045] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:14.265 [2024-12-07 10:36:13.526058] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:14.265 [2024-12-07 10:36:13.526067] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:14.265 [2024-12-07 10:36:13.526077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.265 [2024-12-07 10:36:13.526088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:14.265 [2024-12-07 10:36:13.526098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:24:14.265 [2024-12-07 10:36:13.526107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.265 [2024-12-07 10:36:13.526177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.265 [2024-12-07 10:36:13.526187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:14.265 [2024-12-07 10:36:13.526197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:14.265 [2024-12-07 10:36:13.526206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.265 [2024-12-07 10:36:13.526300] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:14.265 [2024-12-07 10:36:13.526315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:14.265 [2024-12-07 10:36:13.526326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:24:14.265 [2024-12-07 10:36:13.526336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.265 [2024-12-07 10:36:13.526346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:14.265 [2024-12-07 10:36:13.526355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:14.265 [2024-12-07 10:36:13.526364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:14.265 [2024-12-07 10:36:13.526374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:14.265 [2024-12-07 10:36:13.526383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:14.265 [2024-12-07 10:36:13.526392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:14.265 [2024-12-07 10:36:13.526401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:14.265 [2024-12-07 10:36:13.526411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:14.265 [2024-12-07 10:36:13.526419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:14.265 [2024-12-07 10:36:13.526454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:14.265 [2024-12-07 10:36:13.526463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:14.265 [2024-12-07 10:36:13.526473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.265 [2024-12-07 10:36:13.526483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:14.265 [2024-12-07 10:36:13.526493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:14.265 [2024-12-07 10:36:13.526502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.265 [2024-12-07 10:36:13.526511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:14.265 [2024-12-07 10:36:13.526521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:14.265 [2024-12-07 10:36:13.526530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.265 [2024-12-07 10:36:13.526539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:14.265 [2024-12-07 10:36:13.526548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:14.265 [2024-12-07 10:36:13.526557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.265 [2024-12-07 10:36:13.526574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:14.265 [2024-12-07 10:36:13.526584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:14.265 [2024-12-07 10:36:13.526593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.265 [2024-12-07 10:36:13.526602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:14.265 [2024-12-07 10:36:13.526611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:14.265 [2024-12-07 10:36:13.526621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.265 [2024-12-07 10:36:13.526630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:14.265 [2024-12-07 10:36:13.526639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:14.265 [2024-12-07 10:36:13.526648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:14.265 [2024-12-07 10:36:13.526657] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:24:14.265 [2024-12-07 10:36:13.526666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:14.265 [2024-12-07 10:36:13.526675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:14.265 [2024-12-07 10:36:13.526684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:14.265 [2024-12-07 10:36:13.526694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:14.265 [2024-12-07 10:36:13.526704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.265 [2024-12-07 10:36:13.526713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:14.265 [2024-12-07 10:36:13.526722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:14.265 [2024-12-07 10:36:13.526731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.265 [2024-12-07 10:36:13.526740] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:14.265 [2024-12-07 10:36:13.526750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:14.265 [2024-12-07 10:36:13.526759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:14.265 [2024-12-07 10:36:13.526769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.266 [2024-12-07 10:36:13.526779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:14.266 [2024-12-07 10:36:13.526788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:14.266 [2024-12-07 10:36:13.526796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:14.266 [2024-12-07 10:36:13.526806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:14.266 [2024-12-07 10:36:13.526814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:14.266 [2024-12-07 10:36:13.526824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:14.266 [2024-12-07 10:36:13.526834] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:14.266 [2024-12-07 10:36:13.526846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:14.266 [2024-12-07 10:36:13.526862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:14.266 [2024-12-07 10:36:13.526872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:14.266 [2024-12-07 10:36:13.526883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:14.266 [2024-12-07 10:36:13.526893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:14.266 [2024-12-07 10:36:13.526903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:14.266 [2024-12-07 10:36:13.526914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:14.266 [2024-12-07 10:36:13.526924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:14.266 [2024-12-07 10:36:13.526935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:14.266 [2024-12-07 10:36:13.526946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:14.266 [2024-12-07 10:36:13.526956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:14.266 [2024-12-07 10:36:13.526967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:14.266 [2024-12-07 10:36:13.526988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:14.266 [2024-12-07 10:36:13.526999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:14.266 [2024-12-07 10:36:13.527010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:14.266 [2024-12-07 10:36:13.527020] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:14.266 [2024-12-07 10:36:13.527031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:14.266 [2024-12-07 10:36:13.527042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:14.266 [2024-12-07 10:36:13.527053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:14.266 [2024-12-07 10:36:13.527064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:14.266 [2024-12-07 10:36:13.527074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:14.266 [2024-12-07 10:36:13.527085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.266 [2024-12-07 10:36:13.527096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:14.266 [2024-12-07 10:36:13.527106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:24:14.266 [2024-12-07 10:36:13.527116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.266 [2024-12-07 10:36:13.564653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.266 [2024-12-07 10:36:13.564688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:14.266 [2024-12-07 10:36:13.564701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.551 ms 00:24:14.266 [2024-12-07 10:36:13.564715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.266 [2024-12-07 10:36:13.564785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.266 [2024-12-07 10:36:13.564796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:14.266 [2024-12-07 10:36:13.564806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.047 ms 00:24:14.266 [2024-12-07 10:36:13.564815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.525 [2024-12-07 10:36:13.621257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.525 [2024-12-07 10:36:13.621411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:14.525 [2024-12-07 10:36:13.621433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.478 ms 00:24:14.525 [2024-12-07 10:36:13.621444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.525 [2024-12-07 10:36:13.621478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.525 [2024-12-07 10:36:13.621491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:14.525 [2024-12-07 10:36:13.621508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:14.525 [2024-12-07 10:36:13.621519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.525 [2024-12-07 10:36:13.622025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.525 [2024-12-07 10:36:13.622041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:14.525 [2024-12-07 10:36:13.622053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:24:14.525 [2024-12-07 10:36:13.622063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.525 [2024-12-07 10:36:13.622180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.525 [2024-12-07 10:36:13.622194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:14.525 [2024-12-07 10:36:13.622210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:24:14.525 [2024-12-07 10:36:13.622220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.525 [2024-12-07 10:36:13.641028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.641063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:14.526 [2024-12-07 10:36:13.641075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.819 ms 00:24:14.526 [2024-12-07 10:36:13.641100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.659785] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:14.526 [2024-12-07 10:36:13.659822] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:14.526 [2024-12-07 10:36:13.659836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.659847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:14.526 [2024-12-07 10:36:13.659857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.673 ms 00:24:14.526 [2024-12-07 10:36:13.659867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.688142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.688307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:14.526 [2024-12-07 10:36:13.688328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.282 ms 00:24:14.526 [2024-12-07 10:36:13.688355] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.705907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.705940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:14.526 [2024-12-07 10:36:13.705952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.497 ms 00:24:14.526 [2024-12-07 10:36:13.705961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.722874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.722907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:14.526 [2024-12-07 10:36:13.722919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.896 ms 00:24:14.526 [2024-12-07 10:36:13.722929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.723743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.723774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:14.526 [2024-12-07 10:36:13.723786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:24:14.526 [2024-12-07 10:36:13.723803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.805211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.805283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:14.526 [2024-12-07 10:36:13.805301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.512 ms 00:24:14.526 [2024-12-07 10:36:13.805333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.815503] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:14.526 [2024-12-07 10:36:13.817951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.817987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:14.526 [2024-12-07 10:36:13.818001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.592 ms 00:24:14.526 [2024-12-07 10:36:13.818011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.818099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.818113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:14.526 [2024-12-07 10:36:13.818124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:14.526 [2024-12-07 10:36:13.818134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.818209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.818222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:14.526 [2024-12-07 10:36:13.818232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:14.526 [2024-12-07 10:36:13.818242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.818261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.818272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:24:14.526 [2024-12-07 10:36:13.818282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:14.526 [2024-12-07 10:36:13.818292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.818325] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:14.526 [2024-12-07 10:36:13.818340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.818350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:14.526 [2024-12-07 10:36:13.818359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:14.526 [2024-12-07 10:36:13.818369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.853472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.853507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:14.526 [2024-12-07 10:36:13.853520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.140 ms 00:24:14.526 [2024-12-07 10:36:13.853535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.853598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.526 [2024-12-07 10:36:13.853609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:14.526 [2024-12-07 10:36:13.853620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:14.526 [2024-12-07 10:36:13.853629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.526 [2024-12-07 10:36:13.854700] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 362.480 ms, result 0 00:24:15.903  [2024-12-07T10:36:16.191Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-07T10:36:17.126Z] Copying: 48/1024 [MB] (24 MBps) [2024-12-07T10:36:18.060Z] Copying: 73/1024 [MB] (24 MBps) [2024-12-07T10:36:18.996Z] Copying: 99/1024 [MB] (26 MBps) [2024-12-07T10:36:19.936Z] Copying: 124/1024 [MB] (24 MBps) [2024-12-07T10:36:20.872Z] Copying: 149/1024 [MB] (24 MBps) [2024-12-07T10:36:22.246Z] Copying: 172/1024 [MB] (23 MBps) [2024-12-07T10:36:23.183Z] Copying: 196/1024 [MB] (23 MBps) [2024-12-07T10:36:24.119Z] Copying: 219/1024 [MB] (23 MBps) [2024-12-07T10:36:25.056Z] Copying: 243/1024 [MB] (24 MBps) [2024-12-07T10:36:26.001Z] Copying: 267/1024 [MB] (23 MBps) [2024-12-07T10:36:26.936Z] Copying: 292/1024 [MB] (24 MBps) [2024-12-07T10:36:27.872Z] Copying: 316/1024 [MB] (24 MBps) [2024-12-07T10:36:29.249Z] Copying: 340/1024 [MB] (24 MBps) [2024-12-07T10:36:30.184Z] Copying: 364/1024 [MB] (24 MBps) [2024-12-07T10:36:31.118Z] Copying: 388/1024 [MB] (23 MBps) [2024-12-07T10:36:32.056Z] Copying: 413/1024 [MB] (25 MBps) [2024-12-07T10:36:33.052Z] Copying: 438/1024 [MB] (24 MBps) [2024-12-07T10:36:34.011Z] Copying: 463/1024 [MB] (24 MBps) [2024-12-07T10:36:34.949Z] Copying: 487/1024 [MB] (24 MBps) [2024-12-07T10:36:35.886Z] Copying: 511/1024 [MB] (24 MBps) [2024-12-07T10:36:37.261Z] Copying: 535/1024 [MB] (24 MBps) [2024-12-07T10:36:38.196Z] Copying: 558/1024 [MB] (22 MBps) [2024-12-07T10:36:39.133Z] Copying: 582/1024 [MB] (24 MBps) [2024-12-07T10:36:40.071Z] Copying: 607/1024 [MB] (24 MBps) [2024-12-07T10:36:41.008Z] Copying: 631/1024 [MB] (23 MBps) [2024-12-07T10:36:41.947Z] Copying: 656/1024 [MB] (24 
MBps) [2024-12-07T10:36:42.882Z] Copying: 680/1024 [MB] (24 MBps) [2024-12-07T10:36:44.255Z] Copying: 705/1024 [MB] (24 MBps) [2024-12-07T10:36:44.823Z] Copying: 729/1024 [MB] (24 MBps) [2024-12-07T10:36:46.198Z] Copying: 753/1024 [MB] (23 MBps) [2024-12-07T10:36:47.136Z] Copying: 776/1024 [MB] (23 MBps) [2024-12-07T10:36:48.075Z] Copying: 801/1024 [MB] (24 MBps) [2024-12-07T10:36:49.012Z] Copying: 826/1024 [MB] (25 MBps) [2024-12-07T10:36:49.949Z] Copying: 851/1024 [MB] (25 MBps) [2024-12-07T10:36:50.885Z] Copying: 876/1024 [MB] (25 MBps) [2024-12-07T10:36:51.822Z] Copying: 902/1024 [MB] (25 MBps) [2024-12-07T10:36:53.201Z] Copying: 926/1024 [MB] (24 MBps) [2024-12-07T10:36:54.138Z] Copying: 950/1024 [MB] (23 MBps) [2024-12-07T10:36:55.075Z] Copying: 974/1024 [MB] (24 MBps) [2024-12-07T10:36:56.010Z] Copying: 999/1024 [MB] (24 MBps) [2024-12-07T10:36:56.010Z] Copying: 1023/1024 [MB] (24 MBps) [2024-12-07T10:36:56.010Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-07 10:36:55.814480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.657 [2024-12-07 10:36:55.814527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:56.657 [2024-12-07 10:36:55.814544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:56.657 [2024-12-07 10:36:55.814556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.657 [2024-12-07 10:36:55.814589] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:56.657 [2024-12-07 10:36:55.818796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.657 [2024-12-07 10:36:55.818841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:56.657 [2024-12-07 10:36:55.818862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.194 ms 00:24:56.657 [2024-12-07 10:36:55.818872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.657 [2024-12-07 10:36:55.820671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.657 [2024-12-07 10:36:55.820710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:56.657 [2024-12-07 10:36:55.820723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 00:24:56.657 [2024-12-07 10:36:55.820733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.657 [2024-12-07 10:36:55.838493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.657 [2024-12-07 10:36:55.838531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:56.657 [2024-12-07 10:36:55.838544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.771 ms 00:24:56.657 [2024-12-07 10:36:55.838570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.657 [2024-12-07 10:36:55.843401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.657 [2024-12-07 10:36:55.843432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:56.657 [2024-12-07 10:36:55.843444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.775 ms 00:24:56.657 [2024-12-07 10:36:55.843469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.657 [2024-12-07 10:36:55.878575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.657 [2024-12-07 10:36:55.878739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist NV cache metadata 00:24:56.657 [2024-12-07 10:36:55.878760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.106 ms 00:24:56.657 [2024-12-07 10:36:55.878788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.657 [2024-12-07 10:36:55.899151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.657 [2024-12-07 10:36:55.899187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:56.657 [2024-12-07 10:36:55.899201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.360 ms 00:24:56.657 [2024-12-07 10:36:55.899226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.657 [2024-12-07 10:36:55.899351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.657 [2024-12-07 10:36:55.899367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:56.657 [2024-12-07 10:36:55.899378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:56.657 [2024-12-07 10:36:55.899387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.657 [2024-12-07 10:36:55.933645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.657 [2024-12-07 10:36:55.933686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:56.657 [2024-12-07 10:36:55.933697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.298 ms 00:24:56.657 [2024-12-07 10:36:55.933706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.657 [2024-12-07 10:36:55.967934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.657 [2024-12-07 10:36:55.967968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:56.657 [2024-12-07 10:36:55.967989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.247 ms 00:24:56.657 [2024-12-07 10:36:55.967999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.657 [2024-12-07 10:36:56.001156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.657 [2024-12-07 10:36:56.001190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:56.657 [2024-12-07 10:36:56.001202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.158 ms 00:24:56.657 [2024-12-07 10:36:56.001211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.917 [2024-12-07 10:36:56.035259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.917 [2024-12-07 10:36:56.035397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:56.917 [2024-12-07 10:36:56.035415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.033 ms 00:24:56.917 [2024-12-07 10:36:56.035441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.917 [2024-12-07 10:36:56.035503] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:56.917 [2024-12-07 10:36:56.035520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 
10:36:56.035560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:24:56.917 [2024-12-07 10:36:56.035814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.035996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:56.917 [2024-12-07 10:36:56.036164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:56.918 [2024-12-07 10:36:56.036598] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:56.918 [2024-12-07 10:36:56.036611] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 812d4f29-38c0-44f5-af4a-828d2ebd97c9 00:24:56.918 [2024-12-07 10:36:56.036622] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 0 00:24:56.918 [2024-12-07 10:36:56.036631] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:56.918 [2024-12-07 10:36:56.036640] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:56.918 [2024-12-07 10:36:56.036650] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:56.918 [2024-12-07 10:36:56.036659] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:56.918 [2024-12-07 10:36:56.036678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:56.918 [2024-12-07 10:36:56.036688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:56.918 [2024-12-07 10:36:56.036696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:56.918 [2024-12-07 10:36:56.036705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:56.918 [2024-12-07 10:36:56.036714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.918 [2024-12-07 10:36:56.036724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:56.918 [2024-12-07 10:36:56.036734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:24:56.918 [2024-12-07 10:36:56.036744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.918 [2024-12-07 10:36:56.055883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.918 [2024-12-07 10:36:56.055915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:56.918 [2024-12-07 10:36:56.055926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.133 ms 00:24:56.918 [2024-12-07 10:36:56.055936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.918 [2024-12-07 10:36:56.056450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.918 [2024-12-07 10:36:56.056466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:56.918 [2024-12-07 10:36:56.056476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:24:56.918 [2024-12-07 10:36:56.056491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.918 [2024-12-07 10:36:56.106068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.918 [2024-12-07 10:36:56.106102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:56.918 [2024-12-07 10:36:56.106114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.918 [2024-12-07 10:36:56.106125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.918 [2024-12-07 10:36:56.106172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.918 [2024-12-07 10:36:56.106183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:56.918 [2024-12-07 10:36:56.106193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.918 [2024-12-07 10:36:56.106207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.918 [2024-12-07 10:36:56.106288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.918 [2024-12-07 10:36:56.106302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:56.918 [2024-12-07 10:36:56.106311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.918 [2024-12-07 10:36:56.106321] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.918 [2024-12-07 10:36:56.106337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.918 [2024-12-07 10:36:56.106347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:56.918 [2024-12-07 10:36:56.106357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.918 [2024-12-07 10:36:56.106367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.918 [2024-12-07 10:36:56.222835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.918 [2024-12-07 10:36:56.222884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:56.918 [2024-12-07 10:36:56.222898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.918 [2024-12-07 10:36:56.222924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.177 [2024-12-07 10:36:56.320350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.177 [2024-12-07 10:36:56.320392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:57.177 [2024-12-07 10:36:56.320405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.177 [2024-12-07 10:36:56.320422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.177 [2024-12-07 10:36:56.320509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.177 [2024-12-07 10:36:56.320521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:57.177 [2024-12-07 10:36:56.320531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.177 [2024-12-07 10:36:56.320541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.177 [2024-12-07 10:36:56.320577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.177 [2024-12-07 10:36:56.320588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:57.177 [2024-12-07 10:36:56.320598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.177 [2024-12-07 10:36:56.320608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.177 [2024-12-07 10:36:56.320729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.177 [2024-12-07 10:36:56.320742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:57.177 [2024-12-07 10:36:56.320752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.177 [2024-12-07 10:36:56.320762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.177 [2024-12-07 10:36:56.320795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.177 [2024-12-07 10:36:56.320808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:57.177 [2024-12-07 10:36:56.320818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.177 [2024-12-07 10:36:56.320827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.177 [2024-12-07 10:36:56.320863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.177 [2024-12-07 10:36:56.320878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:57.177 [2024-12-07 10:36:56.320887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:24:57.177 [2024-12-07 10:36:56.320897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.177 [2024-12-07 10:36:56.320938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.177 [2024-12-07 10:36:56.320950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:57.177 [2024-12-07 10:36:56.320960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.177 [2024-12-07 10:36:56.320969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.177 [2024-12-07 10:36:56.321122] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 507.440 ms, result 0 00:24:58.554 00:24:58.554 00:24:58.554 10:36:57 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:24:58.554 [2024-12-07 10:36:57.612601] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:24:58.554 [2024-12-07 10:36:57.612719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79747 ] 00:24:58.554 [2024-12-07 10:36:57.788769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.554 [2024-12-07 10:36:57.895265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.122 [2024-12-07 10:36:58.229996] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:59.122 [2024-12-07 10:36:58.230081] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:59.122 [2024-12-07 10:36:58.390554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.122 [2024-12-07 10:36:58.390750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:59.122 [2024-12-07 10:36:58.390774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:59.122 [2024-12-07 10:36:58.390785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.122 [2024-12-07 10:36:58.390847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.122 [2024-12-07 10:36:58.390862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:59.122 [2024-12-07 10:36:58.390872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:59.122 [2024-12-07 10:36:58.390883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.122 [2024-12-07 10:36:58.390906] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:59.122 [2024-12-07 10:36:58.391923] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:59.122 [2024-12-07 10:36:58.391950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.122 [2024-12-07 10:36:58.391960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:59.122 [2024-12-07 10:36:58.391972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:24:59.122 [2024-12-07 10:36:58.391992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:59.122 [2024-12-07 10:36:58.393422] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:59.122 [2024-12-07 10:36:58.411467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.122 [2024-12-07 10:36:58.411504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:59.122 [2024-12-07 10:36:58.411517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.075 ms 00:24:59.122 [2024-12-07 10:36:58.411526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.122 [2024-12-07 10:36:58.411592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.122 [2024-12-07 10:36:58.411604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:59.122 [2024-12-07 10:36:58.411615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:59.122 [2024-12-07 10:36:58.411624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.122 [2024-12-07 10:36:58.418540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.122 [2024-12-07 10:36:58.418569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:59.122 [2024-12-07 10:36:58.418584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.859 ms 00:24:59.122 [2024-12-07 10:36:58.418598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.122 [2024-12-07 10:36:58.418670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.122 [2024-12-07 10:36:58.418682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:59.122 [2024-12-07 10:36:58.418692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:59.122 [2024-12-07 10:36:58.418701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.122 [2024-12-07 10:36:58.418738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.122 [2024-12-07 10:36:58.418749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:59.122 [2024-12-07 10:36:58.418759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:59.122 [2024-12-07 10:36:58.418768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.122 [2024-12-07 10:36:58.418794] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:59.122 [2024-12-07 10:36:58.423197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.122 [2024-12-07 10:36:58.423229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:59.122 [2024-12-07 10:36:58.423243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.414 ms 00:24:59.122 [2024-12-07 10:36:58.423269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.122 [2024-12-07 10:36:58.423302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.122 [2024-12-07 10:36:58.423312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:59.122 [2024-12-07 10:36:58.423323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:59.123 [2024-12-07 10:36:58.423332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.123 [2024-12-07 10:36:58.423383] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
FTL layout setup mode 0 00:24:59.123 [2024-12-07 10:36:58.423409] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:59.123 [2024-12-07 10:36:58.423442] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:59.123 [2024-12-07 10:36:58.423464] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:59.123 [2024-12-07 10:36:58.423559] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:59.123 [2024-12-07 10:36:58.423572] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:59.123 [2024-12-07 10:36:58.423584] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:59.123 [2024-12-07 10:36:58.423598] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:59.123 [2024-12-07 10:36:58.423609] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:59.123 [2024-12-07 10:36:58.423620] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:59.123 [2024-12-07 10:36:58.423630] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:59.123 [2024-12-07 10:36:58.423644] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:59.123 [2024-12-07 10:36:58.423654] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:59.123 [2024-12-07 10:36:58.423665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.123 [2024-12-07 10:36:58.423675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:59.123 [2024-12-07 10:36:58.423685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:24:59.123 [2024-12-07 10:36:58.423694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.123 [2024-12-07 10:36:58.423763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.123 [2024-12-07 10:36:58.423774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:59.123 [2024-12-07 10:36:58.423783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:59.123 [2024-12-07 10:36:58.423793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.123 [2024-12-07 10:36:58.423886] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:59.123 [2024-12-07 10:36:58.423901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:59.123 [2024-12-07 10:36:58.423911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:59.123 [2024-12-07 10:36:58.423921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.123 [2024-12-07 10:36:58.423931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:59.123 [2024-12-07 10:36:58.423940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:59.123 [2024-12-07 10:36:58.423950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:59.123 [2024-12-07 10:36:58.423960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:59.123 [2024-12-07 10:36:58.423969] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:59.123 [2024-12-07 10:36:58.423978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:59.123 [2024-12-07 10:36:58.423988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:59.123 [2024-12-07 10:36:58.424019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:59.123 [2024-12-07 10:36:58.424028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:59.123 [2024-12-07 10:36:58.424061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:59.123 [2024-12-07 10:36:58.424070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:59.123 [2024-12-07 10:36:58.424079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.123 [2024-12-07 10:36:58.424088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:59.123 [2024-12-07 10:36:58.424098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:59.123 [2024-12-07 10:36:58.424106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.123 [2024-12-07 10:36:58.424116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:59.123 [2024-12-07 10:36:58.424124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:59.123 [2024-12-07 10:36:58.424133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.123 [2024-12-07 10:36:58.424142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:59.123 [2024-12-07 10:36:58.424151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:59.123 [2024-12-07 10:36:58.424160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.123 [2024-12-07 10:36:58.424169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:59.123 [2024-12-07 10:36:58.424178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:59.123 [2024-12-07 10:36:58.424187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.123 [2024-12-07 10:36:58.424197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:59.123 [2024-12-07 10:36:58.424206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:59.123 [2024-12-07 10:36:58.424214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.123 [2024-12-07 10:36:58.424233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:59.123 [2024-12-07 10:36:58.424243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:59.123 [2024-12-07 10:36:58.424252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:59.123 [2024-12-07 10:36:58.424261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:59.123 [2024-12-07 10:36:58.424270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:59.123 [2024-12-07 10:36:58.424278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:59.123 [2024-12-07 10:36:58.424287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:59.123 [2024-12-07 10:36:58.424296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:59.123 [2024-12-07 10:36:58.424305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:24:59.123 [2024-12-07 10:36:58.424313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:59.123 [2024-12-07 10:36:58.424322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:59.123 [2024-12-07 10:36:58.424333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.123 [2024-12-07 10:36:58.424342] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:59.123 [2024-12-07 10:36:58.424352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:59.123 [2024-12-07 10:36:58.424361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:59.123 [2024-12-07 10:36:58.424370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.123 [2024-12-07 10:36:58.424380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:59.123 [2024-12-07 10:36:58.424389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:59.123 [2024-12-07 10:36:58.424398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:59.123 [2024-12-07 10:36:58.424408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:59.123 [2024-12-07 10:36:58.424417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:59.123 [2024-12-07 10:36:58.424426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:59.123 [2024-12-07 10:36:58.424436] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:59.123 [2024-12-07 10:36:58.424449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:59.123 [2024-12-07 10:36:58.424466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:59.123 [2024-12-07 10:36:58.424476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:59.123 [2024-12-07 10:36:58.424486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:59.123 [2024-12-07 10:36:58.424496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:59.123 [2024-12-07 10:36:58.424506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:59.123 [2024-12-07 10:36:58.424516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:59.123 [2024-12-07 10:36:58.424526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:59.123 [2024-12-07 10:36:58.424536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:59.123 [2024-12-07 10:36:58.424546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:59.123 [2024-12-07 10:36:58.424557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:59.123 [2024-12-07 
10:36:58.424567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:59.123 [2024-12-07 10:36:58.424577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:59.123 [2024-12-07 10:36:58.424587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:59.123 [2024-12-07 10:36:58.424597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:59.123 [2024-12-07 10:36:58.424607] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:59.123 [2024-12-07 10:36:58.424617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:59.123 [2024-12-07 10:36:58.424629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:59.123 [2024-12-07 10:36:58.424639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:59.123 [2024-12-07 10:36:58.424649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:59.123 [2024-12-07 10:36:58.424659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:59.123 [2024-12-07 10:36:58.424669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.123 [2024-12-07 10:36:58.424680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:59.123 [2024-12-07 10:36:58.424689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:24:59.123 [2024-12-07 10:36:58.424699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.123 [2024-12-07 10:36:58.461776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.123 [2024-12-07 10:36:58.461812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:59.123 [2024-12-07 10:36:58.461825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.091 ms 00:24:59.123 [2024-12-07 10:36:58.461838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.123 [2024-12-07 10:36:58.461917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.123 [2024-12-07 10:36:58.461928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:59.123 [2024-12-07 10:36:58.461938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:59.123 [2024-12-07 10:36:58.461947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.534182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.534221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:59.382 [2024-12-07 10:36:58.534235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.266 ms 00:24:59.382 [2024-12-07 10:36:58.534262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 
10:36:58.534305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.534317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:59.382 [2024-12-07 10:36:58.534332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:59.382 [2024-12-07 10:36:58.534342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.534869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.534885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:59.382 [2024-12-07 10:36:58.534896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:24:59.382 [2024-12-07 10:36:58.534906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.535043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.535058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:59.382 [2024-12-07 10:36:58.535075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:24:59.382 [2024-12-07 10:36:58.535085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.553496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.553533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:59.382 [2024-12-07 10:36:58.553546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.418 ms 00:24:59.382 [2024-12-07 10:36:58.553573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.571759] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:59.382 [2024-12-07 10:36:58.571934] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:59.382 [2024-12-07 10:36:58.571955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.571966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:59.382 [2024-12-07 10:36:58.571991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.309 ms 00:24:59.382 [2024-12-07 10:36:58.572002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.600414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.600453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:59.382 [2024-12-07 10:36:58.600467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.413 ms 00:24:59.382 [2024-12-07 10:36:58.600494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.617724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.617758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:59.382 [2024-12-07 10:36:58.617770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.202 ms 00:24:59.382 [2024-12-07 10:36:58.617779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.634775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.634808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:59.382 [2024-12-07 10:36:58.634821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.986 ms 00:24:59.382 [2024-12-07 10:36:58.634830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.635554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.635585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:59.382 [2024-12-07 10:36:58.635601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:24:59.382 [2024-12-07 10:36:58.635611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.716216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.716275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:59.382 [2024-12-07 10:36:58.716297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.712 ms 00:24:59.382 [2024-12-07 10:36:58.716307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.726782] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:59.382 [2024-12-07 10:36:58.729010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.729037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:59.382 [2024-12-07 10:36:58.729050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.677 ms 00:24:59.382 [2024-12-07 10:36:58.729060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.729131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.729144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:59.382 [2024-12-07 10:36:58.729159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:59.382 [2024-12-07 10:36:58.729168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.729257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.729269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:59.382 [2024-12-07 10:36:58.729280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:59.382 [2024-12-07 10:36:58.729290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.729314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.729325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:59.382 [2024-12-07 10:36:58.729335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:59.382 [2024-12-07 10:36:58.729345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.382 [2024-12-07 10:36:58.729381] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:59.382 [2024-12-07 10:36:58.729392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.382 [2024-12-07 10:36:58.729419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Self test on startup 00:24:59.382 [2024-12-07 10:36:58.729430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:59.382 [2024-12-07 10:36:58.729440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.641 [2024-12-07 10:36:58.764404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.641 [2024-12-07 10:36:58.764441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:59.641 [2024-12-07 10:36:58.764461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.999 ms 00:24:59.641 [2024-12-07 10:36:58.764471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.641 [2024-12-07 10:36:58.764538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.641 [2024-12-07 10:36:58.764550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:59.641 [2024-12-07 10:36:58.764560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:59.641 [2024-12-07 10:36:58.764570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.641 [2024-12-07 10:36:58.765660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 375.270 ms, result 0 00:25:01.021  [2024-12-07T10:37:01.311Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-07T10:37:02.246Z] Copying: 51/1024 [MB] (25 MBps) [2024-12-07T10:37:03.186Z] Copying: 77/1024 [MB] (25 MBps) [2024-12-07T10:37:04.190Z] Copying: 103/1024 [MB] (25 MBps) [2024-12-07T10:37:05.139Z] Copying: 128/1024 [MB] (25 MBps) [2024-12-07T10:37:06.073Z] Copying: 153/1024 [MB] (25 MBps) [2024-12-07T10:37:07.009Z] Copying: 179/1024 [MB] (25 MBps) [2024-12-07T10:37:08.389Z] Copying: 204/1024 [MB] (25 MBps) [2024-12-07T10:37:08.959Z] Copying: 230/1024 [MB] (25 MBps) [2024-12-07T10:37:10.338Z] Copying: 256/1024 [MB] (25 MBps) [2024-12-07T10:37:11.275Z] Copying: 282/1024 [MB] (25 MBps) [2024-12-07T10:37:12.208Z] Copying: 308/1024 [MB] (26 MBps) [2024-12-07T10:37:13.145Z] Copying: 334/1024 [MB] (25 MBps) [2024-12-07T10:37:14.082Z] Copying: 359/1024 [MB] (25 MBps) [2024-12-07T10:37:15.019Z] Copying: 384/1024 [MB] (25 MBps) [2024-12-07T10:37:15.963Z] Copying: 410/1024 [MB] (25 MBps) [2024-12-07T10:37:17.339Z] Copying: 435/1024 [MB] (24 MBps) [2024-12-07T10:37:18.272Z] Copying: 461/1024 [MB] (25 MBps) [2024-12-07T10:37:19.206Z] Copying: 486/1024 [MB] (25 MBps) [2024-12-07T10:37:20.139Z] Copying: 512/1024 [MB] (25 MBps) [2024-12-07T10:37:21.075Z] Copying: 536/1024 [MB] (24 MBps) [2024-12-07T10:37:22.009Z] Copying: 562/1024 [MB] (25 MBps) [2024-12-07T10:37:22.945Z] Copying: 587/1024 [MB] (25 MBps) [2024-12-07T10:37:24.321Z] Copying: 613/1024 [MB] (25 MBps) [2024-12-07T10:37:25.257Z] Copying: 639/1024 [MB] (25 MBps) [2024-12-07T10:37:26.191Z] Copying: 666/1024 [MB] (26 MBps) [2024-12-07T10:37:27.127Z] Copying: 691/1024 [MB] (25 MBps) [2024-12-07T10:37:28.064Z] Copying: 717/1024 [MB] (25 MBps) [2024-12-07T10:37:29.002Z] Copying: 742/1024 [MB] (25 MBps) [2024-12-07T10:37:29.939Z] Copying: 768/1024 [MB] (25 MBps) [2024-12-07T10:37:31.326Z] Copying: 793/1024 [MB] (25 MBps) [2024-12-07T10:37:32.262Z] Copying: 818/1024 [MB] (25 MBps) [2024-12-07T10:37:33.197Z] Copying: 844/1024 [MB] (25 MBps) [2024-12-07T10:37:34.133Z] Copying: 870/1024 [MB] (26 MBps) [2024-12-07T10:37:35.068Z] Copying: 897/1024 [MB] (26 MBps) [2024-12-07T10:37:36.083Z] Copying: 923/1024 [MB] (26 MBps) [2024-12-07T10:37:37.045Z] Copying: 949/1024 [MB] (26 
MBps) [2024-12-07T10:37:37.980Z] Copying: 976/1024 [MB] (26 MBps) [2024-12-07T10:37:38.935Z] Copying: 1002/1024 [MB] (26 MBps) [2024-12-07T10:37:39.195Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-07 10:37:39.188097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.842 [2024-12-07 10:37:39.189019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:39.842 [2024-12-07 10:37:39.189058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:39.842 [2024-12-07 10:37:39.189075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.842 [2024-12-07 10:37:39.189137] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:39.842 [2024-12-07 10:37:39.193907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.842 [2024-12-07 10:37:39.194059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:39.842 [2024-12-07 10:37:39.194079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.749 ms 00:25:39.842 [2024-12-07 10:37:39.194096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.842 [2024-12-07 10:37:39.194355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.842 [2024-12-07 10:37:39.194373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:39.842 [2024-12-07 10:37:39.194389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:25:39.842 [2024-12-07 10:37:39.194405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.103 [2024-12-07 10:37:39.197923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.103 [2024-12-07 10:37:39.197957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:40.103 [2024-12-07 10:37:39.197981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms 00:25:40.103 [2024-12-07 10:37:39.198003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.103 [2024-12-07 10:37:39.204882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.103 [2024-12-07 10:37:39.204921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:40.103 [2024-12-07 10:37:39.204935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.857 ms 00:25:40.103 [2024-12-07 10:37:39.204945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.103 [2024-12-07 10:37:39.243476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.103 [2024-12-07 10:37:39.243516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:40.103 [2024-12-07 10:37:39.243530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.496 ms 00:25:40.103 [2024-12-07 10:37:39.243540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.103 [2024-12-07 10:37:39.262787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.103 [2024-12-07 10:37:39.262823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:40.103 [2024-12-07 10:37:39.262837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.236 ms 00:25:40.103 [2024-12-07 10:37:39.262847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.103 [2024-12-07 10:37:39.262994] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.103 [2024-12-07 10:37:39.263025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:40.103 [2024-12-07 10:37:39.263036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:25:40.103 [2024-12-07 10:37:39.263046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.103 [2024-12-07 10:37:39.297824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.103 [2024-12-07 10:37:39.297957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:40.103 [2024-12-07 10:37:39.298005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.817 ms 00:25:40.103 [2024-12-07 10:37:39.298016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.103 [2024-12-07 10:37:39.332199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.103 [2024-12-07 10:37:39.332231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:40.104 [2024-12-07 10:37:39.332244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.202 ms 00:25:40.104 [2024-12-07 10:37:39.332269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.104 [2024-12-07 10:37:39.365222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.104 [2024-12-07 10:37:39.365253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:40.104 [2024-12-07 10:37:39.365265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.970 ms 00:25:40.104 [2024-12-07 10:37:39.365275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.104 [2024-12-07 10:37:39.398906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.104 [2024-12-07 10:37:39.399060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:40.104 [2024-12-07 10:37:39.399081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.614 ms 00:25:40.104 [2024-12-07 10:37:39.399091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.104 [2024-12-07 10:37:39.399143] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:40.104 [2024-12-07 10:37:39.399168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399522] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 
10:37:39.399784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.399973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.400007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.400018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:40.104 [2024-12-07 10:37:39.400029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 
00:25:40.105 [2024-12-07 10:37:39.400072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:40.105 [2024-12-07 10:37:39.400268] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:40.105 [2024-12-07 10:37:39.400278] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 812d4f29-38c0-44f5-af4a-828d2ebd97c9 00:25:40.105 [2024-12-07 10:37:39.400289] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:40.105 [2024-12-07 10:37:39.400307] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:40.105 [2024-12-07 10:37:39.400317] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:40.105 [2024-12-07 10:37:39.400327] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:40.105 [2024-12-07 10:37:39.400345] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:40.105 [2024-12-07 10:37:39.400356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:40.105 [2024-12-07 10:37:39.400366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:40.105 [2024-12-07 10:37:39.400375] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:40.105 [2024-12-07 10:37:39.400384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:40.105 [2024-12-07 10:37:39.400394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.105 [2024-12-07 10:37:39.400404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:40.105 [2024-12-07 10:37:39.400415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:25:40.105 [2024-12-07 10:37:39.400428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.105 [2024-12-07 10:37:39.420805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.105 [2024-12-07 10:37:39.420839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:40.105 [2024-12-07 10:37:39.420851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.353 ms 00:25:40.105 [2024-12-07 10:37:39.420861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.105 [2024-12-07 10:37:39.421379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.105 [2024-12-07 10:37:39.421391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:40.105 [2024-12-07 10:37:39.421407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:25:40.105 [2024-12-07 10:37:39.421417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.366 [2024-12-07 10:37:39.472728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.366 [2024-12-07 10:37:39.472761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:40.366 [2024-12-07 10:37:39.472773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.366 [2024-12-07 10:37:39.472798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.366 [2024-12-07 10:37:39.472849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.366 [2024-12-07 10:37:39.472860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:40.366 [2024-12-07 10:37:39.472875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.366 [2024-12-07 10:37:39.472885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.366 [2024-12-07 10:37:39.472949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.366 [2024-12-07 10:37:39.472962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:40.366 [2024-12-07 10:37:39.472973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.366 [2024-12-07 10:37:39.472982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.366 [2024-12-07 10:37:39.473009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.366 [2024-12-07 10:37:39.473021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:40.366 [2024-12-07 10:37:39.473031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.366 [2024-12-07 10:37:39.473045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.366 [2024-12-07 10:37:39.590825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.366 [2024-12-07 10:37:39.590983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:25:40.366 [2024-12-07 10:37:39.591072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.366 [2024-12-07 10:37:39.591109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.366 [2024-12-07 10:37:39.687638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.366 [2024-12-07 10:37:39.687775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:40.366 [2024-12-07 10:37:39.687870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.366 [2024-12-07 10:37:39.687905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.366 [2024-12-07 10:37:39.688026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.366 [2024-12-07 10:37:39.688066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:40.366 [2024-12-07 10:37:39.688097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.366 [2024-12-07 10:37:39.688139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.366 [2024-12-07 10:37:39.688200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.366 [2024-12-07 10:37:39.688344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:40.366 [2024-12-07 10:37:39.688422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.366 [2024-12-07 10:37:39.688451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.366 [2024-12-07 10:37:39.688585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.366 [2024-12-07 10:37:39.688680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:40.366 [2024-12-07 10:37:39.688717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.366 [2024-12-07 10:37:39.688748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.366 [2024-12-07 10:37:39.688818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.366 [2024-12-07 10:37:39.688893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:40.366 [2024-12-07 10:37:39.688909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.366 [2024-12-07 10:37:39.688919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.366 [2024-12-07 10:37:39.688968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.366 [2024-12-07 10:37:39.688997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:40.366 [2024-12-07 10:37:39.689008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.366 [2024-12-07 10:37:39.689018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.366 [2024-12-07 10:37:39.689063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.366 [2024-12-07 10:37:39.689075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:40.366 [2024-12-07 10:37:39.689086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.366 [2024-12-07 10:37:39.689096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.366 [2024-12-07 10:37:39.689216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
501.910 ms, result 0 00:25:41.751 00:25:41.751 00:25:41.751 10:37:40 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:43.132 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:43.132 10:37:42 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:43.132 [2024-12-07 10:37:42.442493] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:25:43.132 [2024-12-07 10:37:42.442609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80201 ] 00:25:43.392 [2024-12-07 10:37:42.622180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.392 [2024-12-07 10:37:42.728708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.963 [2024-12-07 10:37:43.076390] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.963 [2024-12-07 10:37:43.076456] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.963 [2024-12-07 10:37:43.235723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.963 [2024-12-07 10:37:43.235772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:43.963 [2024-12-07 10:37:43.235788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:43.963 [2024-12-07 10:37:43.235798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.963 [2024-12-07 10:37:43.235844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.963 [2024-12-07 10:37:43.235859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:43.963 [2024-12-07 10:37:43.235868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:43.963 [2024-12-07 10:37:43.235878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.963 [2024-12-07 10:37:43.235897] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:43.963 [2024-12-07 10:37:43.236897] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:43.963 [2024-12-07 10:37:43.236932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.963 [2024-12-07 10:37:43.236944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:43.963 [2024-12-07 10:37:43.236956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:25:43.963 [2024-12-07 10:37:43.236965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.963 [2024-12-07 10:37:43.238413] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:43.963 [2024-12-07 10:37:43.274088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.963 [2024-12-07 10:37:43.274152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:43.963 [2024-12-07 10:37:43.274174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.729 ms 00:25:43.963 [2024-12-07 10:37:43.274188] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.963 [2024-12-07 10:37:43.274305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.963 [2024-12-07 10:37:43.274323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:43.963 [2024-12-07 10:37:43.274338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:43.963 [2024-12-07 10:37:43.274352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.963 [2024-12-07 10:37:43.287439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.963 [2024-12-07 10:37:43.287479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:43.963 [2024-12-07 10:37:43.287496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.003 ms 00:25:43.963 [2024-12-07 10:37:43.287513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.963 [2024-12-07 10:37:43.287613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.963 [2024-12-07 10:37:43.287629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:43.963 [2024-12-07 10:37:43.287642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:25:43.963 [2024-12-07 10:37:43.287655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.963 [2024-12-07 10:37:43.287728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.963 [2024-12-07 10:37:43.287742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:43.963 [2024-12-07 10:37:43.287754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:43.963 [2024-12-07 10:37:43.287766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.963 [2024-12-07 10:37:43.287803] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:43.963 [2024-12-07 10:37:43.293529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.963 [2024-12-07 10:37:43.293568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:43.963 [2024-12-07 10:37:43.293587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.743 ms 00:25:43.963 [2024-12-07 10:37:43.293600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.963 [2024-12-07 10:37:43.293644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.963 [2024-12-07 10:37:43.293657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:43.963 [2024-12-07 10:37:43.293670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:43.963 [2024-12-07 10:37:43.293682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.963 [2024-12-07 10:37:43.293729] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:43.963 [2024-12-07 10:37:43.293762] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:43.963 [2024-12-07 10:37:43.293802] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:43.963 [2024-12-07 10:37:43.293837] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:43.963 [2024-12-07 10:37:43.293932] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:43.963 [2024-12-07 10:37:43.293947] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:43.963 [2024-12-07 10:37:43.293962] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:43.963 [2024-12-07 10:37:43.293995] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:43.963 [2024-12-07 10:37:43.294010] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:43.963 [2024-12-07 10:37:43.294023] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:43.963 [2024-12-07 10:37:43.294035] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:43.964 [2024-12-07 10:37:43.294051] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:43.964 [2024-12-07 10:37:43.294063] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:43.964 [2024-12-07 10:37:43.294076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.964 [2024-12-07 10:37:43.294088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:43.964 [2024-12-07 10:37:43.294100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:25:43.964 [2024-12-07 10:37:43.294112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.964 [2024-12-07 10:37:43.294183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.964 [2024-12-07 10:37:43.294198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:43.964 [2024-12-07 10:37:43.294210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:43.964 [2024-12-07 10:37:43.294221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.964 [2024-12-07 10:37:43.294326] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:43.964 [2024-12-07 10:37:43.294345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:43.964 [2024-12-07 10:37:43.294358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.964 [2024-12-07 10:37:43.294370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.964 [2024-12-07 10:37:43.294382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:43.964 [2024-12-07 10:37:43.294393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:43.964 [2024-12-07 10:37:43.294404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:43.964 [2024-12-07 10:37:43.294417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:43.964 [2024-12-07 10:37:43.294428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:43.964 [2024-12-07 10:37:43.294439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.964 [2024-12-07 10:37:43.294450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:43.964 [2024-12-07 10:37:43.294462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:43.964 [2024-12-07 10:37:43.294474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.964 [2024-12-07 
10:37:43.294497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:43.964 [2024-12-07 10:37:43.294509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:43.964 [2024-12-07 10:37:43.294520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.964 [2024-12-07 10:37:43.294531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:43.964 [2024-12-07 10:37:43.294543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:43.964 [2024-12-07 10:37:43.294554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.964 [2024-12-07 10:37:43.294564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:43.964 [2024-12-07 10:37:43.294574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:43.964 [2024-12-07 10:37:43.294585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.964 [2024-12-07 10:37:43.294596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:43.964 [2024-12-07 10:37:43.294617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:43.964 [2024-12-07 10:37:43.294627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.964 [2024-12-07 10:37:43.294638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:43.964 [2024-12-07 10:37:43.294648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:43.964 [2024-12-07 10:37:43.294659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.964 [2024-12-07 10:37:43.294670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:43.964 [2024-12-07 10:37:43.294681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:43.964 [2024-12-07 10:37:43.294691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.964 [2024-12-07 10:37:43.294701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:43.964 [2024-12-07 10:37:43.294711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:43.964 [2024-12-07 10:37:43.294722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.964 [2024-12-07 10:37:43.294732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:43.964 [2024-12-07 10:37:43.294743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:43.964 [2024-12-07 10:37:43.294753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.964 [2024-12-07 10:37:43.294763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:43.964 [2024-12-07 10:37:43.294773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:43.964 [2024-12-07 10:37:43.294784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.964 [2024-12-07 10:37:43.294794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:43.964 [2024-12-07 10:37:43.294804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:43.964 [2024-12-07 10:37:43.294816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.964 [2024-12-07 10:37:43.294827] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:43.964 [2024-12-07 10:37:43.294840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:25:43.964 [2024-12-07 10:37:43.294852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.964 [2024-12-07 10:37:43.294863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.964 [2024-12-07 10:37:43.294874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:43.964 [2024-12-07 10:37:43.294885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:43.964 [2024-12-07 10:37:43.294896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:43.964 [2024-12-07 10:37:43.294906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:43.964 [2024-12-07 10:37:43.294916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:43.964 [2024-12-07 10:37:43.294927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:43.964 [2024-12-07 10:37:43.294939] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:43.964 [2024-12-07 10:37:43.294954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.964 [2024-12-07 10:37:43.294990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:43.964 [2024-12-07 10:37:43.295004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:43.964 [2024-12-07 10:37:43.295016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:43.964 [2024-12-07 10:37:43.295028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:43.964 [2024-12-07 10:37:43.295041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:43.964 [2024-12-07 10:37:43.295052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:43.964 [2024-12-07 10:37:43.295064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:43.964 [2024-12-07 10:37:43.295076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:43.964 [2024-12-07 10:37:43.295087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:43.964 [2024-12-07 10:37:43.295121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:43.964 [2024-12-07 10:37:43.295134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:43.964 [2024-12-07 10:37:43.295146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:43.964 [2024-12-07 10:37:43.295158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:43.964 [2024-12-07 10:37:43.295170] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:43.964 [2024-12-07 10:37:43.295181] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:43.964 [2024-12-07 10:37:43.295194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.964 [2024-12-07 10:37:43.295208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:43.964 [2024-12-07 10:37:43.295220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:43.964 [2024-12-07 10:37:43.295232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:43.964 [2024-12-07 10:37:43.295244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:43.964 [2024-12-07 10:37:43.295256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.964 [2024-12-07 10:37:43.295269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:43.964 [2024-12-07 10:37:43.295280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:25:43.964 [2024-12-07 10:37:43.295308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.225 [2024-12-07 10:37:43.342715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.225 [2024-12-07 10:37:43.343016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:44.225 [2024-12-07 10:37:43.343045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.425 ms 00:25:44.225 [2024-12-07 10:37:43.343069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.225 [2024-12-07 10:37:43.343158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.225 [2024-12-07 10:37:43.343172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:44.225 [2024-12-07 10:37:43.343186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:44.225 [2024-12-07 10:37:43.343198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.225 [2024-12-07 10:37:43.426759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.225 [2024-12-07 10:37:43.426802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:44.225 [2024-12-07 10:37:43.426820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.601 ms 00:25:44.225 [2024-12-07 10:37:43.426832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.225 [2024-12-07 10:37:43.426879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.225 [2024-12-07 10:37:43.426899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:44.225 [2024-12-07 10:37:43.426912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:44.225 [2024-12-07 10:37:43.426925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.225 [2024-12-07 10:37:43.427766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.225 [2024-12-07 
10:37:43.427791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:44.225 [2024-12-07 10:37:43.427805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:25:44.225 [2024-12-07 10:37:43.427817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.225 [2024-12-07 10:37:43.427956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.225 [2024-12-07 10:37:43.427971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:44.225 [2024-12-07 10:37:43.428011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:25:44.225 [2024-12-07 10:37:43.428023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.225 [2024-12-07 10:37:43.451857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.225 [2024-12-07 10:37:43.451899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:44.225 [2024-12-07 10:37:43.451915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.845 ms 00:25:44.225 [2024-12-07 10:37:43.451927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.225 [2024-12-07 10:37:43.472189] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:44.225 [2024-12-07 10:37:43.472234] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:44.225 [2024-12-07 10:37:43.472252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.225 [2024-12-07 10:37:43.472264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:44.225 [2024-12-07 10:37:43.472278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.198 ms 00:25:44.225 [2024-12-07 10:37:43.472289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.225 [2024-12-07 10:37:43.501445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.225 [2024-12-07 10:37:43.501490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:44.225 [2024-12-07 10:37:43.501506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.152 ms 00:25:44.225 [2024-12-07 10:37:43.501518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.225 [2024-12-07 10:37:43.519311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.225 [2024-12-07 10:37:43.519354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:44.225 [2024-12-07 10:37:43.519369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.752 ms 00:25:44.225 [2024-12-07 10:37:43.519382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.225 [2024-12-07 10:37:43.536565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.225 [2024-12-07 10:37:43.536814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:44.225 [2024-12-07 10:37:43.536837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.166 ms 00:25:44.225 [2024-12-07 10:37:43.536850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.225 [2024-12-07 10:37:43.537683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.225 [2024-12-07 10:37:43.537716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:25:44.225 [2024-12-07 10:37:43.537737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:25:44.225 [2024-12-07 10:37:43.537749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-12-07 10:37:43.631845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-12-07 10:37:43.631905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:44.486 [2024-12-07 10:37:43.631932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.222 ms 00:25:44.486 [2024-12-07 10:37:43.631944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-12-07 10:37:43.642051] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:44.486 [2024-12-07 10:37:43.645185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-12-07 10:37:43.645391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:44.486 [2024-12-07 10:37:43.645413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.199 ms 00:25:44.486 [2024-12-07 10:37:43.645426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-12-07 10:37:43.645516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-12-07 10:37:43.645531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:44.486 [2024-12-07 10:37:43.645550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:44.486 [2024-12-07 10:37:43.645563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-12-07 10:37:43.645654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-12-07 10:37:43.645669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:44.486 [2024-12-07 10:37:43.645681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:44.486 [2024-12-07 10:37:43.645693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-12-07 10:37:43.645724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-12-07 10:37:43.645738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:44.486 [2024-12-07 10:37:43.645750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:44.486 [2024-12-07 10:37:43.645761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-12-07 10:37:43.645812] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:44.486 [2024-12-07 10:37:43.645827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-12-07 10:37:43.645839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:44.486 [2024-12-07 10:37:43.645852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:44.486 [2024-12-07 10:37:43.645866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-12-07 10:37:43.680730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-12-07 10:37:43.680890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:44.486 [2024-12-07 10:37:43.680921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 34.894 ms 00:25:44.486 [2024-12-07 10:37:43.680934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-12-07 10:37:43.681026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-12-07 10:37:43.681041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:44.486 [2024-12-07 10:37:43.681054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:44.486 [2024-12-07 10:37:43.681067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-12-07 10:37:43.682512] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 446.979 ms, result 0 00:25:45.426  [2024-12-07T10:37:45.715Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-07T10:37:47.092Z] Copying: 46/1024 [MB] (23 MBps) [2024-12-07T10:37:48.029Z] Copying: 70/1024 [MB] (23 MBps) [2024-12-07T10:37:48.966Z] Copying: 93/1024 [MB] (23 MBps) [2024-12-07T10:37:49.905Z] Copying: 117/1024 [MB] (23 MBps) [2024-12-07T10:37:50.843Z] Copying: 140/1024 [MB] (23 MBps) [2024-12-07T10:37:51.784Z] Copying: 164/1024 [MB] (23 MBps) [2024-12-07T10:37:52.723Z] Copying: 187/1024 [MB] (23 MBps) [2024-12-07T10:37:54.104Z] Copying: 211/1024 [MB] (23 MBps) [2024-12-07T10:37:55.041Z] Copying: 234/1024 [MB] (23 MBps) [2024-12-07T10:37:55.979Z] Copying: 257/1024 [MB] (23 MBps) [2024-12-07T10:37:56.915Z] Copying: 281/1024 [MB] (23 MBps) [2024-12-07T10:37:57.849Z] Copying: 304/1024 [MB] (23 MBps) [2024-12-07T10:37:58.785Z] Copying: 328/1024 [MB] (23 MBps) [2024-12-07T10:37:59.721Z] Copying: 351/1024 [MB] (23 MBps) [2024-12-07T10:38:00.676Z] Copying: 375/1024 [MB] (23 MBps) [2024-12-07T10:38:02.053Z] Copying: 398/1024 [MB] (23 MBps) [2024-12-07T10:38:02.993Z] Copying: 422/1024 [MB] (23 MBps) [2024-12-07T10:38:03.930Z] Copying: 445/1024 [MB] (23 MBps) [2024-12-07T10:38:04.867Z] Copying: 468/1024 [MB] (22 MBps) [2024-12-07T10:38:05.801Z] Copying: 492/1024 [MB] (23 MBps) [2024-12-07T10:38:06.825Z] Copying: 515/1024 [MB] (23 MBps) [2024-12-07T10:38:07.789Z] Copying: 538/1024 [MB] (23 MBps) [2024-12-07T10:38:08.727Z] Copying: 561/1024 [MB] (23 MBps) [2024-12-07T10:38:09.666Z] Copying: 585/1024 [MB] (23 MBps) [2024-12-07T10:38:11.046Z] Copying: 608/1024 [MB] (23 MBps) [2024-12-07T10:38:11.987Z] Copying: 631/1024 [MB] (23 MBps) [2024-12-07T10:38:12.927Z] Copying: 654/1024 [MB] (23 MBps) [2024-12-07T10:38:13.867Z] Copying: 678/1024 [MB] (23 MBps) [2024-12-07T10:38:14.804Z] Copying: 701/1024 [MB] (23 MBps) [2024-12-07T10:38:15.738Z] Copying: 725/1024 [MB] (23 MBps) [2024-12-07T10:38:16.673Z] Copying: 749/1024 [MB] (23 MBps) [2024-12-07T10:38:18.047Z] Copying: 772/1024 [MB] (23 MBps) [2024-12-07T10:38:18.999Z] Copying: 796/1024 [MB] (23 MBps) [2024-12-07T10:38:19.941Z] Copying: 819/1024 [MB] (23 MBps) [2024-12-07T10:38:20.878Z] Copying: 842/1024 [MB] (23 MBps) [2024-12-07T10:38:21.816Z] Copying: 866/1024 [MB] (23 MBps) [2024-12-07T10:38:22.762Z] Copying: 889/1024 [MB] (23 MBps) [2024-12-07T10:38:23.700Z] Copying: 912/1024 [MB] (23 MBps) [2024-12-07T10:38:24.639Z] Copying: 935/1024 [MB] (23 MBps) [2024-12-07T10:38:26.018Z] Copying: 958/1024 [MB] (22 MBps) [2024-12-07T10:38:26.956Z] Copying: 981/1024 [MB] (22 MBps) [2024-12-07T10:38:27.895Z] Copying: 1004/1024 [MB] (23 MBps) [2024-12-07T10:38:28.155Z] Copying: 1023/1024 [MB] (18 MBps) [2024-12-07T10:38:28.155Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-07 10:38:27.991773] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:28.802 [2024-12-07 10:38:27.991862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:28.802 [2024-12-07 10:38:27.991892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:28.802 [2024-12-07 10:38:27.991904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.802 [2024-12-07 10:38:27.993620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:28.802 [2024-12-07 10:38:28.001105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.802 [2024-12-07 10:38:28.001147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:28.802 [2024-12-07 10:38:28.001163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.449 ms 00:26:28.802 [2024-12-07 10:38:28.001175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.802 [2024-12-07 10:38:28.011117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.802 [2024-12-07 10:38:28.011160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:28.802 [2024-12-07 10:38:28.011176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.428 ms 00:26:28.802 [2024-12-07 10:38:28.011196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.802 [2024-12-07 10:38:28.033887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.802 [2024-12-07 10:38:28.033933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:28.802 [2024-12-07 10:38:28.033948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.707 ms 00:26:28.802 [2024-12-07 10:38:28.033960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.802 [2024-12-07 10:38:28.038586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.802 [2024-12-07 10:38:28.038631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:28.802 [2024-12-07 10:38:28.038647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.584 ms 00:26:28.802 [2024-12-07 10:38:28.038667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.802 [2024-12-07 10:38:28.075013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.802 [2024-12-07 10:38:28.075055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:28.802 [2024-12-07 10:38:28.075071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.342 ms 00:26:28.802 [2024-12-07 10:38:28.075082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.802 [2024-12-07 10:38:28.096216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.802 [2024-12-07 10:38:28.096428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:28.802 [2024-12-07 10:38:28.096453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.125 ms 00:26:28.802 [2024-12-07 10:38:28.096465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.063 [2024-12-07 10:38:28.190345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.063 [2024-12-07 10:38:28.190391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:29.063 [2024-12-07 10:38:28.190406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 93.987 ms 00:26:29.063 [2024-12-07 10:38:28.190419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.063 [2024-12-07 10:38:28.225734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.063 [2024-12-07 10:38:28.225775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:29.063 [2024-12-07 10:38:28.225789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.352 ms 00:26:29.063 [2024-12-07 10:38:28.225800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.063 [2024-12-07 10:38:28.260149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.063 [2024-12-07 10:38:28.260196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:29.064 [2024-12-07 10:38:28.260211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.363 ms 00:26:29.064 [2024-12-07 10:38:28.260222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.064 [2024-12-07 10:38:28.293861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.064 [2024-12-07 10:38:28.294037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:29.064 [2024-12-07 10:38:28.294059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.652 ms 00:26:29.064 [2024-12-07 10:38:28.294070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.064 [2024-12-07 10:38:28.328192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.064 [2024-12-07 10:38:28.328233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:29.064 [2024-12-07 10:38:28.328247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.099 ms 00:26:29.064 [2024-12-07 10:38:28.328258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.064 [2024-12-07 10:38:28.328298] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:29.064 [2024-12-07 10:38:28.328316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 83200 / 261120 wr_cnt: 1 state: open 00:26:29.064 [2024-12-07 10:38:28.328330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328436] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 
10:38:28.328722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.328972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:26:29.064 [2024-12-07 10:38:28.329043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:29.064 [2024-12-07 10:38:28.329298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:29.065 [2024-12-07 10:38:28.329544] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:29.065 [2024-12-07 10:38:28.329554] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 812d4f29-38c0-44f5-af4a-828d2ebd97c9 00:26:29.065 [2024-12-07 10:38:28.329569] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 83200 00:26:29.065 [2024-12-07 10:38:28.329580] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 84160 00:26:29.065 [2024-12-07 10:38:28.329590] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 83200 00:26:29.065 [2024-12-07 10:38:28.329602] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0115 00:26:29.065 [2024-12-07 10:38:28.329631] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:29.065 [2024-12-07 10:38:28.329642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:29.065 [2024-12-07 10:38:28.329653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:29.065 [2024-12-07 10:38:28.329663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:29.065 [2024-12-07 10:38:28.329672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:29.065 [2024-12-07 10:38:28.329683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:29.065 [2024-12-07 10:38:28.329696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:29.065 [2024-12-07 10:38:28.329708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.388 ms 00:26:29.065 [2024-12-07 10:38:28.329718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.065 [2024-12-07 10:38:28.349497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.065 [2024-12-07 10:38:28.349535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:29.065 [2024-12-07 10:38:28.349558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.773 ms 00:26:29.065 [2024-12-07 10:38:28.349570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.065 [2024-12-07 10:38:28.350155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.065 [2024-12-07 10:38:28.350170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:29.065 [2024-12-07 10:38:28.350182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:26:29.065 [2024-12-07 10:38:28.350194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.065 [2024-12-07 10:38:28.401214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.065 [2024-12-07 10:38:28.401253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:29.065 [2024-12-07 10:38:28.401267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.065 [2024-12-07 10:38:28.401279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.065 [2024-12-07 10:38:28.401364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.065 [2024-12-07 10:38:28.401380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:29.065 [2024-12-07 10:38:28.401393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.065 [2024-12-07 10:38:28.401405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.065 [2024-12-07 10:38:28.401476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.065 [2024-12-07 10:38:28.401497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:29.065 [2024-12-07 10:38:28.401509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.065 [2024-12-07 10:38:28.401520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.065 [2024-12-07 10:38:28.401538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.065 [2024-12-07 10:38:28.401550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:29.065 [2024-12-07 10:38:28.401562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.065 [2024-12-07 10:38:28.401573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.325 [2024-12-07 10:38:28.527881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.325 [2024-12-07 10:38:28.527947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:29.325 [2024-12-07 10:38:28.527966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.325 [2024-12-07 10:38:28.527990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.325 [2024-12-07 
10:38:28.628416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.325 [2024-12-07 10:38:28.628482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:29.325 [2024-12-07 10:38:28.628500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.325 [2024-12-07 10:38:28.628514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.325 [2024-12-07 10:38:28.628630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.325 [2024-12-07 10:38:28.628645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:29.325 [2024-12-07 10:38:28.628659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.325 [2024-12-07 10:38:28.628679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.325 [2024-12-07 10:38:28.628737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.325 [2024-12-07 10:38:28.628751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:29.325 [2024-12-07 10:38:28.628763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.325 [2024-12-07 10:38:28.628774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.325 [2024-12-07 10:38:28.628908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.325 [2024-12-07 10:38:28.628924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:29.325 [2024-12-07 10:38:28.628938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.325 [2024-12-07 10:38:28.628955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.325 [2024-12-07 10:38:28.629027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.325 [2024-12-07 10:38:28.629042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:29.325 [2024-12-07 10:38:28.629054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.325 [2024-12-07 10:38:28.629067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.325 [2024-12-07 10:38:28.629124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.325 [2024-12-07 10:38:28.629138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:29.325 [2024-12-07 10:38:28.629151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.325 [2024-12-07 10:38:28.629162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.325 [2024-12-07 10:38:28.629228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.325 [2024-12-07 10:38:28.629243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:29.325 [2024-12-07 10:38:28.629255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.325 [2024-12-07 10:38:28.629268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.325 [2024-12-07 10:38:28.629436] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 639.374 ms, result 0 00:26:31.234 00:26:31.234 00:26:31.234 10:38:30 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:31.234 [2024-12-07 10:38:30.397406] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:26:31.234 [2024-12-07 10:38:30.397523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80686 ] 00:26:31.234 [2024-12-07 10:38:30.579640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.494 [2024-12-07 10:38:30.707223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.064 [2024-12-07 10:38:31.122031] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:32.064 [2024-12-07 10:38:31.122118] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:32.064 [2024-12-07 10:38:31.288838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.064 [2024-12-07 10:38:31.289106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:32.064 [2024-12-07 10:38:31.289136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:32.064 [2024-12-07 10:38:31.289150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.064 [2024-12-07 10:38:31.289228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.064 [2024-12-07 10:38:31.289248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:32.064 [2024-12-07 10:38:31.289262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:32.064 [2024-12-07 10:38:31.289275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.064 [2024-12-07 10:38:31.289302] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:32.064 [2024-12-07 10:38:31.290324] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:32.064 [2024-12-07 10:38:31.290357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.064 [2024-12-07 10:38:31.290369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:32.064 [2024-12-07 10:38:31.290382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:26:32.064 [2024-12-07 10:38:31.290394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.064 [2024-12-07 10:38:31.292961] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:32.064 [2024-12-07 10:38:31.311617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.064 [2024-12-07 10:38:31.311665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:32.064 [2024-12-07 10:38:31.311684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.687 ms 00:26:32.064 [2024-12-07 10:38:31.311697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.064 [2024-12-07 10:38:31.311792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.064 [2024-12-07 10:38:31.311807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:32.064 [2024-12-07 10:38:31.311821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 
00:26:32.064 [2024-12-07 10:38:31.311834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.064 [2024-12-07 10:38:31.324391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.064 [2024-12-07 10:38:31.324425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:32.064 [2024-12-07 10:38:31.324440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.495 ms 00:26:32.064 [2024-12-07 10:38:31.324457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.064 [2024-12-07 10:38:31.324549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.064 [2024-12-07 10:38:31.324565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:32.064 [2024-12-07 10:38:31.324579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:32.064 [2024-12-07 10:38:31.324591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.064 [2024-12-07 10:38:31.324652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.064 [2024-12-07 10:38:31.324667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:32.064 [2024-12-07 10:38:31.324679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:32.064 [2024-12-07 10:38:31.324691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.064 [2024-12-07 10:38:31.324725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:32.064 [2024-12-07 10:38:31.330292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.064 [2024-12-07 10:38:31.330331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:32.064 [2024-12-07 10:38:31.330349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.583 ms 00:26:32.064 [2024-12-07 10:38:31.330362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.064 [2024-12-07 10:38:31.330402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.064 [2024-12-07 10:38:31.330416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:32.064 [2024-12-07 10:38:31.330428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:32.064 [2024-12-07 10:38:31.330440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.064 [2024-12-07 10:38:31.330482] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:32.064 [2024-12-07 10:38:31.330514] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:32.064 [2024-12-07 10:38:31.330553] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:32.064 [2024-12-07 10:38:31.330579] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:32.064 [2024-12-07 10:38:31.330681] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:32.064 [2024-12-07 10:38:31.330697] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:32.064 [2024-12-07 10:38:31.330713] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 
0x190 bytes 00:26:32.064 [2024-12-07 10:38:31.330727] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:32.064 [2024-12-07 10:38:31.330741] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:32.064 [2024-12-07 10:38:31.330754] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:32.064 [2024-12-07 10:38:31.330768] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:32.064 [2024-12-07 10:38:31.330786] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:32.064 [2024-12-07 10:38:31.330798] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:32.064 [2024-12-07 10:38:31.330810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.064 [2024-12-07 10:38:31.330823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:32.064 [2024-12-07 10:38:31.330835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:26:32.064 [2024-12-07 10:38:31.330847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.064 [2024-12-07 10:38:31.330917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.064 [2024-12-07 10:38:31.330931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:32.064 [2024-12-07 10:38:31.330942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:32.064 [2024-12-07 10:38:31.330954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.064 [2024-12-07 10:38:31.331077] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:32.064 [2024-12-07 10:38:31.331096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:32.064 [2024-12-07 10:38:31.331108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:32.064 [2024-12-07 10:38:31.331120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.064 [2024-12-07 10:38:31.331132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:32.064 [2024-12-07 10:38:31.331144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:32.064 [2024-12-07 10:38:31.331154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:32.064 [2024-12-07 10:38:31.331168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:32.064 [2024-12-07 10:38:31.331180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:32.064 [2024-12-07 10:38:31.331190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:32.064 [2024-12-07 10:38:31.331201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:32.064 [2024-12-07 10:38:31.331215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:32.064 [2024-12-07 10:38:31.331227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:32.064 [2024-12-07 10:38:31.331249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:32.064 [2024-12-07 10:38:31.331260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:32.064 [2024-12-07 10:38:31.331271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.064 [2024-12-07 10:38:31.331282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region nvc_md_mirror 00:26:32.064 [2024-12-07 10:38:31.331293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:32.064 [2024-12-07 10:38:31.331304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.064 [2024-12-07 10:38:31.331315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:32.064 [2024-12-07 10:38:31.331325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:32.064 [2024-12-07 10:38:31.331337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:32.064 [2024-12-07 10:38:31.331348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:32.064 [2024-12-07 10:38:31.331358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:32.064 [2024-12-07 10:38:31.331368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:32.065 [2024-12-07 10:38:31.331378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:32.065 [2024-12-07 10:38:31.331388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:32.065 [2024-12-07 10:38:31.331398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:32.065 [2024-12-07 10:38:31.331408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:32.065 [2024-12-07 10:38:31.331418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:32.065 [2024-12-07 10:38:31.331428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:32.065 [2024-12-07 10:38:31.331439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:32.065 [2024-12-07 10:38:31.331450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:32.065 [2024-12-07 10:38:31.331460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:32.065 [2024-12-07 10:38:31.331470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:32.065 [2024-12-07 10:38:31.331480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:32.065 [2024-12-07 10:38:31.331490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:32.065 [2024-12-07 10:38:31.331500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:32.065 [2024-12-07 10:38:31.331511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:32.065 [2024-12-07 10:38:31.331521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.065 [2024-12-07 10:38:31.331532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:32.065 [2024-12-07 10:38:31.331542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:32.065 [2024-12-07 10:38:31.331553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.065 [2024-12-07 10:38:31.331564] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:32.065 [2024-12-07 10:38:31.331577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:32.065 [2024-12-07 10:38:31.331588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:32.065 [2024-12-07 10:38:31.331599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.065 [2024-12-07 10:38:31.331610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:32.065 [2024-12-07 10:38:31.331621] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:32.065 [2024-12-07 10:38:31.331633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:32.065 [2024-12-07 10:38:31.331643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:32.065 [2024-12-07 10:38:31.331654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:32.065 [2024-12-07 10:38:31.331664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:32.065 [2024-12-07 10:38:31.331676] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:32.065 [2024-12-07 10:38:31.331690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:32.065 [2024-12-07 10:38:31.331708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:32.065 [2024-12-07 10:38:31.331721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:32.065 [2024-12-07 10:38:31.331733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:32.065 [2024-12-07 10:38:31.331745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:32.065 [2024-12-07 10:38:31.331758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:32.065 [2024-12-07 10:38:31.331769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:32.065 [2024-12-07 10:38:31.331780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:32.065 [2024-12-07 10:38:31.331791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:32.065 [2024-12-07 10:38:31.331803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:32.065 [2024-12-07 10:38:31.331815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:32.065 [2024-12-07 10:38:31.331827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:32.065 [2024-12-07 10:38:31.331839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:32.065 [2024-12-07 10:38:31.331851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:32.065 [2024-12-07 10:38:31.331863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:32.065 [2024-12-07 10:38:31.331874] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:32.065 [2024-12-07 10:38:31.331887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:32.065 [2024-12-07 10:38:31.331900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:32.065 [2024-12-07 10:38:31.331911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:32.065 [2024-12-07 10:38:31.331923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:32.065 [2024-12-07 10:38:31.331934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:32.065 [2024-12-07 10:38:31.331947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.065 [2024-12-07 10:38:31.331959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:32.065 [2024-12-07 10:38:31.331971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:26:32.065 [2024-12-07 10:38:31.332352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.065 [2024-12-07 10:38:31.380075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.065 [2024-12-07 10:38:31.380293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:32.065 [2024-12-07 10:38:31.380393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.697 ms 00:26:32.065 [2024-12-07 10:38:31.380446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.065 [2024-12-07 10:38:31.380554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.065 [2024-12-07 10:38:31.380659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:32.065 [2024-12-07 10:38:31.380700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:32.065 [2024-12-07 10:38:31.380736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.323 [2024-12-07 10:38:31.453899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.323 [2024-12-07 10:38:31.454115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:32.323 [2024-12-07 10:38:31.454213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.133 ms 00:26:32.323 [2024-12-07 10:38:31.454256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.323 [2024-12-07 10:38:31.454351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.323 [2024-12-07 10:38:31.454401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:32.323 [2024-12-07 10:38:31.454438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:32.323 [2024-12-07 10:38:31.454546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.323 [2024-12-07 10:38:31.455448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.323 [2024-12-07 10:38:31.455597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:32.323 [2024-12-07 10:38:31.455689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:26:32.323 [2024-12-07 10:38:31.455730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.323 [2024-12-07 10:38:31.455904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:32.323 [2024-12-07 10:38:31.455947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:32.323 [2024-12-07 10:38:31.456060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:26:32.323 [2024-12-07 10:38:31.456104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.323 [2024-12-07 10:38:31.477001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.323 [2024-12-07 10:38:31.477155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:32.323 [2024-12-07 10:38:31.477310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.875 ms 00:26:32.323 [2024-12-07 10:38:31.477329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.323 [2024-12-07 10:38:31.496539] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:32.323 [2024-12-07 10:38:31.496583] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:32.323 [2024-12-07 10:38:31.496601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.323 [2024-12-07 10:38:31.496614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:32.323 [2024-12-07 10:38:31.496627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.160 ms 00:26:32.323 [2024-12-07 10:38:31.496638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.323 [2024-12-07 10:38:31.525867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.323 [2024-12-07 10:38:31.525910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:32.323 [2024-12-07 10:38:31.525926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.226 ms 00:26:32.323 [2024-12-07 10:38:31.525938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.324 [2024-12-07 10:38:31.542761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.324 [2024-12-07 10:38:31.542802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:32.324 [2024-12-07 10:38:31.542817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.789 ms 00:26:32.324 [2024-12-07 10:38:31.542829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.324 [2024-12-07 10:38:31.559544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.324 [2024-12-07 10:38:31.559586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:32.324 [2024-12-07 10:38:31.559600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.699 ms 00:26:32.324 [2024-12-07 10:38:31.559612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.324 [2024-12-07 10:38:31.560348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.324 [2024-12-07 10:38:31.560378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:32.324 [2024-12-07 10:38:31.560397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:26:32.324 [2024-12-07 10:38:31.560409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.324 [2024-12-07 10:38:31.652633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.324 [2024-12-07 
10:38:31.652861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:32.324 [2024-12-07 10:38:31.652896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.344 ms 00:26:32.324 [2024-12-07 10:38:31.652910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.324 [2024-12-07 10:38:31.663161] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:32.324 [2024-12-07 10:38:31.666436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.324 [2024-12-07 10:38:31.666472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:32.324 [2024-12-07 10:38:31.666487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.482 ms 00:26:32.324 [2024-12-07 10:38:31.666500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.324 [2024-12-07 10:38:31.666581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.324 [2024-12-07 10:38:31.666596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:32.324 [2024-12-07 10:38:31.666615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:32.324 [2024-12-07 10:38:31.666638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.324 [2024-12-07 10:38:31.668580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.324 [2024-12-07 10:38:31.668627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:32.324 [2024-12-07 10:38:31.668642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms 00:26:32.324 [2024-12-07 10:38:31.668653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.324 [2024-12-07 10:38:31.668696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.324 [2024-12-07 10:38:31.668709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:32.324 [2024-12-07 10:38:31.668722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:32.324 [2024-12-07 10:38:31.668734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.324 [2024-12-07 10:38:31.668790] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:32.324 [2024-12-07 10:38:31.668806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.324 [2024-12-07 10:38:31.668818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:32.324 [2024-12-07 10:38:31.668831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:32.324 [2024-12-07 10:38:31.668843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.581 [2024-12-07 10:38:31.704560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.582 [2024-12-07 10:38:31.704607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:32.582 [2024-12-07 10:38:31.704632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.747 ms 00:26:32.582 [2024-12-07 10:38:31.704644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.582 [2024-12-07 10:38:31.704726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.582 [2024-12-07 10:38:31.704741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:32.582 [2024-12-07 
10:38:31.704755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:32.582 [2024-12-07 10:38:31.704767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.582 [2024-12-07 10:38:31.706312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 417.583 ms, result 0 00:26:33.960  [2024-12-07T10:38:34.248Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-07T10:38:35.186Z] Copying: 39/1024 [MB] (23 MBps) [2024-12-07T10:38:36.119Z] Copying: 63/1024 [MB] (24 MBps) [2024-12-07T10:38:37.055Z] Copying: 88/1024 [MB] (24 MBps) [2024-12-07T10:38:38.045Z] Copying: 113/1024 [MB] (25 MBps) [2024-12-07T10:38:39.025Z] Copying: 139/1024 [MB] (26 MBps) [2024-12-07T10:38:39.963Z] Copying: 165/1024 [MB] (26 MBps) [2024-12-07T10:38:40.911Z] Copying: 192/1024 [MB] (26 MBps) [2024-12-07T10:38:42.288Z] Copying: 218/1024 [MB] (26 MBps) [2024-12-07T10:38:43.223Z] Copying: 243/1024 [MB] (25 MBps) [2024-12-07T10:38:44.160Z] Copying: 270/1024 [MB] (26 MBps) [2024-12-07T10:38:45.096Z] Copying: 294/1024 [MB] (24 MBps) [2024-12-07T10:38:46.030Z] Copying: 320/1024 [MB] (25 MBps) [2024-12-07T10:38:46.966Z] Copying: 346/1024 [MB] (25 MBps) [2024-12-07T10:38:47.903Z] Copying: 371/1024 [MB] (25 MBps) [2024-12-07T10:38:49.282Z] Copying: 396/1024 [MB] (24 MBps) [2024-12-07T10:38:50.216Z] Copying: 420/1024 [MB] (24 MBps) [2024-12-07T10:38:51.153Z] Copying: 445/1024 [MB] (24 MBps) [2024-12-07T10:38:52.101Z] Copying: 471/1024 [MB] (25 MBps) [2024-12-07T10:38:53.038Z] Copying: 497/1024 [MB] (25 MBps) [2024-12-07T10:38:53.975Z] Copying: 524/1024 [MB] (27 MBps) [2024-12-07T10:38:54.909Z] Copying: 551/1024 [MB] (26 MBps) [2024-12-07T10:38:56.284Z] Copying: 577/1024 [MB] (25 MBps) [2024-12-07T10:38:57.219Z] Copying: 602/1024 [MB] (25 MBps) [2024-12-07T10:38:58.155Z] Copying: 628/1024 [MB] (25 MBps) [2024-12-07T10:38:59.090Z] Copying: 653/1024 [MB] (25 MBps) [2024-12-07T10:39:00.027Z] Copying: 678/1024 [MB] (24 MBps) [2024-12-07T10:39:00.983Z] Copying: 703/1024 [MB] (24 MBps) [2024-12-07T10:39:01.922Z] Copying: 728/1024 [MB] (25 MBps) [2024-12-07T10:39:03.301Z] Copying: 754/1024 [MB] (25 MBps) [2024-12-07T10:39:04.239Z] Copying: 779/1024 [MB] (25 MBps) [2024-12-07T10:39:05.180Z] Copying: 805/1024 [MB] (26 MBps) [2024-12-07T10:39:06.119Z] Copying: 831/1024 [MB] (26 MBps) [2024-12-07T10:39:07.052Z] Copying: 857/1024 [MB] (25 MBps) [2024-12-07T10:39:07.987Z] Copying: 882/1024 [MB] (25 MBps) [2024-12-07T10:39:08.924Z] Copying: 908/1024 [MB] (25 MBps) [2024-12-07T10:39:09.942Z] Copying: 934/1024 [MB] (25 MBps) [2024-12-07T10:39:10.881Z] Copying: 960/1024 [MB] (25 MBps) [2024-12-07T10:39:12.263Z] Copying: 986/1024 [MB] (25 MBps) [2024-12-07T10:39:12.522Z] Copying: 1011/1024 [MB] (25 MBps) [2024-12-07T10:39:12.522Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-07 10:39:12.505456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.169 [2024-12-07 10:39:12.505744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:13.169 [2024-12-07 10:39:12.505797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:13.169 [2024-12-07 10:39:12.505818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.169 [2024-12-07 10:39:12.505879] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:13.169 [2024-12-07 10:39:12.511436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:13.169 [2024-12-07 10:39:12.511486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:13.169 [2024-12-07 10:39:12.511502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.531 ms 00:27:13.169 [2024-12-07 10:39:12.511514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.169 [2024-12-07 10:39:12.511760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.169 [2024-12-07 10:39:12.511775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:13.169 [2024-12-07 10:39:12.511789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:27:13.169 [2024-12-07 10:39:12.511808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.169 [2024-12-07 10:39:12.519142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.169 [2024-12-07 10:39:12.519192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:13.169 [2024-12-07 10:39:12.519220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.325 ms 00:27:13.169 [2024-12-07 10:39:12.519232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.429 [2024-12-07 10:39:12.524336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.429 [2024-12-07 10:39:12.524506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:13.429 [2024-12-07 10:39:12.524528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.067 ms 00:27:13.429 [2024-12-07 10:39:12.524548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.429 [2024-12-07 10:39:12.560549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.429 [2024-12-07 10:39:12.560588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:13.429 [2024-12-07 10:39:12.560602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.007 ms 00:27:13.429 [2024-12-07 10:39:12.560611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.429 [2024-12-07 10:39:12.581353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.429 [2024-12-07 10:39:12.581406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:13.429 [2024-12-07 10:39:12.581420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.736 ms 00:27:13.429 [2024-12-07 10:39:12.581430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.429 [2024-12-07 10:39:12.717997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.429 [2024-12-07 10:39:12.718047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:13.429 [2024-12-07 10:39:12.718062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 136.735 ms 00:27:13.429 [2024-12-07 10:39:12.718073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.429 [2024-12-07 10:39:12.753246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.429 [2024-12-07 10:39:12.753281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:13.429 [2024-12-07 10:39:12.753294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.213 ms 00:27:13.429 [2024-12-07 10:39:12.753303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.691 [2024-12-07 10:39:12.788216] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.691 [2024-12-07 10:39:12.788260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:13.691 [2024-12-07 10:39:12.788272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.932 ms 00:27:13.691 [2024-12-07 10:39:12.788282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.691 [2024-12-07 10:39:12.823395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.691 [2024-12-07 10:39:12.823540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:13.691 [2024-12-07 10:39:12.823561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.134 ms 00:27:13.691 [2024-12-07 10:39:12.823572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.691 [2024-12-07 10:39:12.858801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.691 [2024-12-07 10:39:12.858946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:13.691 [2024-12-07 10:39:12.858967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.169 ms 00:27:13.691 [2024-12-07 10:39:12.858990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.691 [2024-12-07 10:39:12.859043] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:13.691 [2024-12-07 10:39:12.859058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:27:13.691 [2024-12-07 10:39:12.859072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 
wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:13.691 [2024-12-07 10:39:12.859691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859745] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.859992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860024] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:13.692 [2024-12-07 10:39:12.860149] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:13.692 [2024-12-07 10:39:12.860159] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 812d4f29-38c0-44f5-af4a-828d2ebd97c9 00:27:13.692 [2024-12-07 10:39:12.860171] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:27:13.692 [2024-12-07 10:39:12.860180] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 48832 00:27:13.692 [2024-12-07 10:39:12.860190] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 47872 00:27:13.692 [2024-12-07 10:39:12.860200] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0201 00:27:13.692 [2024-12-07 10:39:12.860216] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:13.692 [2024-12-07 10:39:12.860236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:13.692 [2024-12-07 10:39:12.860246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:13.692 [2024-12-07 10:39:12.860255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:13.692 [2024-12-07 10:39:12.860264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:13.692 [2024-12-07 10:39:12.860274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.692 [2024-12-07 10:39:12.860284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:13.692 [2024-12-07 10:39:12.860295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:27:13.692 [2024-12-07 10:39:12.860306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.692 [2024-12-07 10:39:12.880175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.692 [2024-12-07 10:39:12.880207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:13.692 [2024-12-07 10:39:12.880225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 19.862 ms 00:27:13.692 [2024-12-07 10:39:12.880251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.692 [2024-12-07 10:39:12.880805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.692 [2024-12-07 10:39:12.880818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:13.692 [2024-12-07 10:39:12.880829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:27:13.692 [2024-12-07 10:39:12.880839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.692 [2024-12-07 10:39:12.931528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.692 [2024-12-07 10:39:12.931671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:13.692 [2024-12-07 10:39:12.931708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.692 [2024-12-07 10:39:12.931720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.692 [2024-12-07 10:39:12.931777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.692 [2024-12-07 10:39:12.931788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:13.692 [2024-12-07 10:39:12.931799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.692 [2024-12-07 10:39:12.931808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.692 [2024-12-07 10:39:12.931899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.692 [2024-12-07 10:39:12.931914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:13.692 [2024-12-07 10:39:12.931928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.692 [2024-12-07 10:39:12.931938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.692 [2024-12-07 10:39:12.931955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.692 [2024-12-07 10:39:12.931966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:13.692 [2024-12-07 10:39:12.931976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.692 [2024-12-07 10:39:12.931987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.952 [2024-12-07 10:39:13.052378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.952 [2024-12-07 10:39:13.052431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:13.952 [2024-12-07 10:39:13.052445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.952 [2024-12-07 10:39:13.052471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.952 [2024-12-07 10:39:13.147844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.952 [2024-12-07 10:39:13.147892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:13.952 [2024-12-07 10:39:13.147906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.952 [2024-12-07 10:39:13.147916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.952 [2024-12-07 10:39:13.148037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.952 [2024-12-07 10:39:13.148051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 
00:27:13.952 [2024-12-07 10:39:13.148062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.952 [2024-12-07 10:39:13.148076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.952 [2024-12-07 10:39:13.148114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.952 [2024-12-07 10:39:13.148125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:13.952 [2024-12-07 10:39:13.148135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.952 [2024-12-07 10:39:13.148162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.952 [2024-12-07 10:39:13.148269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.952 [2024-12-07 10:39:13.148282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:13.952 [2024-12-07 10:39:13.148293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.952 [2024-12-07 10:39:13.148303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.952 [2024-12-07 10:39:13.148346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.952 [2024-12-07 10:39:13.148358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:13.952 [2024-12-07 10:39:13.148369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.953 [2024-12-07 10:39:13.148379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.953 [2024-12-07 10:39:13.148419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.953 [2024-12-07 10:39:13.148430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:13.953 [2024-12-07 10:39:13.148440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.953 [2024-12-07 10:39:13.148450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.953 [2024-12-07 10:39:13.148499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.953 [2024-12-07 10:39:13.148510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:13.953 [2024-12-07 10:39:13.148521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.953 [2024-12-07 10:39:13.148532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.953 [2024-12-07 10:39:13.148655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 644.224 ms, result 0 00:27:14.891 00:27:14.891 00:27:14.891 10:39:14 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:16.795 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:16.795 10:39:15 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:16.795 10:39:15 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:27:16.795 10:39:15 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:16.795 10:39:15 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:16.795 10:39:15 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:16.795 Process with pid 79054 is not found 00:27:16.795 Remove shared memory files 00:27:16.795 10:39:15 ftl.ftl_restore -- ftl/restore.sh@32 -- # 
killprocess 79054 00:27:16.795 10:39:15 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79054 ']' 00:27:16.795 10:39:15 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79054 00:27:16.795 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79054) - No such process 00:27:16.795 10:39:15 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79054 is not found' 00:27:16.795 10:39:15 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:27:16.795 10:39:15 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:16.795 10:39:15 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:27:16.795 10:39:15 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:27:16.795 10:39:15 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:27:16.795 10:39:15 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:16.795 10:39:15 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:27:16.795 ************************************ 00:27:16.795 END TEST ftl_restore 00:27:16.795 ************************************ 00:27:16.795 00:27:16.795 real 3m23.069s 00:27:16.795 user 3m10.094s 00:27:16.795 sys 0m14.118s 00:27:16.795 10:39:15 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.795 10:39:15 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:16.795 10:39:16 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:16.795 10:39:16 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:16.795 10:39:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.795 10:39:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:16.795 ************************************ 00:27:16.795 START TEST ftl_dirty_shutdown 00:27:16.795 ************************************ 00:27:16.795 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:17.055 * Looking for test storage... 
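For reference, the restore-test teardown traced just above (the md5 check followed by restore_kill and remove_shm) reduces to a short shell sequence. A minimal standalone sketch follows, assuming the same paths as this run and that the spdk_tgt pid (79054 here) is held in svcpid; it illustrates the traced steps, not the full restore.sh logic:

    #!/usr/bin/env bash
    # Sketch of the teardown seen in the trace above.
    testdir=/home/vagrant/spdk_repo/spdk/test/ftl
    svcpid=79054                                     # spdk_tgt pid in this run
    md5sum -c "$testdir/testfile.md5"                # verify restored data ("testfile: OK" above)
    rm -f "$testdir/testfile" "$testdir/testfile.md5" "$testdir/config/ftl.json"
    if kill -0 "$svcpid" 2>/dev/null; then           # is the target still running?
        kill "$svcpid"
    else
        echo "Process with pid $svcpid is not found" # the branch taken in this run
    fi
    rm -f /dev/shm/iscsi                             # shared-memory cleanup, as in remove_shm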
00:27:17.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:17.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.055 --rc genhtml_branch_coverage=1 00:27:17.055 --rc genhtml_function_coverage=1 00:27:17.055 --rc genhtml_legend=1 00:27:17.055 --rc geninfo_all_blocks=1 00:27:17.055 --rc geninfo_unexecuted_blocks=1 00:27:17.055 00:27:17.055 ' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:17.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.055 --rc genhtml_branch_coverage=1 00:27:17.055 --rc genhtml_function_coverage=1 00:27:17.055 --rc genhtml_legend=1 00:27:17.055 --rc geninfo_all_blocks=1 00:27:17.055 --rc geninfo_unexecuted_blocks=1 00:27:17.055 00:27:17.055 ' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:17.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.055 --rc genhtml_branch_coverage=1 00:27:17.055 --rc genhtml_function_coverage=1 00:27:17.055 --rc genhtml_legend=1 00:27:17.055 --rc geninfo_all_blocks=1 00:27:17.055 --rc geninfo_unexecuted_blocks=1 00:27:17.055 00:27:17.055 ' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:17.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.055 --rc genhtml_branch_coverage=1 00:27:17.055 --rc genhtml_function_coverage=1 00:27:17.055 --rc genhtml_legend=1 00:27:17.055 --rc geninfo_all_blocks=1 00:27:17.055 --rc geninfo_unexecuted_blocks=1 00:27:17.055 00:27:17.055 ' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:17.055 10:39:16 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81208 00:27:17.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81208 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81208 ']' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:17.055 10:39:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:17.314 [2024-12-07 10:39:16.477123] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
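The option handling traced above (getopts with the ':u:c:' optstring, then a shift) is what turns the 'dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0' invocation into an NV-cache address and a base-device address, plus the fixed sizing parameters. A minimal sketch of that parsing with the values from this run; the variable names mirror the trace, and the handling of -u (accepted by the optstring but not exercised here) is an assumption:

    #!/usr/bin/env bash
    # Sketch: argument parsing as shown by the dirty_shutdown.sh trace above.
    nv_cache="" device=""
    while getopts :u:c: opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;   # 0000:00:10.0 in this run
            u) uuid=$OPTARG ;;       # -u takes an argument but is unused here; "uuid" is a guess
        esac
    done
    shift $((OPTIND - 1))            # the trace shows the equivalent literal "shift 2"
    device=$1                        # 0000:00:11.0 in this run
    timeout=240 block_size=4096 chunk_size=262144 data_size=262144
    echo "nv_cache=$nv_cache device=$device block_size=$block_size"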
00:27:17.314 [2024-12-07 10:39:16.477264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81208 ] 00:27:17.314 [2024-12-07 10:39:16.655759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.572 [2024-12-07 10:39:16.761560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.508 10:39:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.508 10:39:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:18.508 10:39:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:18.508 10:39:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:18.508 10:39:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:18.508 10:39:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:18.509 10:39:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:18.509 10:39:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:18.768 10:39:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:18.768 10:39:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:18.768 10:39:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:18.768 10:39:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:18.768 10:39:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:18.768 10:39:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:18.768 10:39:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:18.768 10:39:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:18.768 10:39:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:18.768 { 00:27:18.768 "name": "nvme0n1", 00:27:18.768 "aliases": [ 00:27:18.768 "bc813755-8f70-4dc2-bebf-4dad196bc976" 00:27:18.768 ], 00:27:18.768 "product_name": "NVMe disk", 00:27:18.769 "block_size": 4096, 00:27:18.769 "num_blocks": 1310720, 00:27:18.769 "uuid": "bc813755-8f70-4dc2-bebf-4dad196bc976", 00:27:18.769 "numa_id": -1, 00:27:18.769 "assigned_rate_limits": { 00:27:18.769 "rw_ios_per_sec": 0, 00:27:18.769 "rw_mbytes_per_sec": 0, 00:27:18.769 "r_mbytes_per_sec": 0, 00:27:18.769 "w_mbytes_per_sec": 0 00:27:18.769 }, 00:27:18.769 "claimed": true, 00:27:18.769 "claim_type": "read_many_write_one", 00:27:18.769 "zoned": false, 00:27:18.769 "supported_io_types": { 00:27:18.769 "read": true, 00:27:18.769 "write": true, 00:27:18.769 "unmap": true, 00:27:18.769 "flush": true, 00:27:18.769 "reset": true, 00:27:18.769 "nvme_admin": true, 00:27:18.769 "nvme_io": true, 00:27:18.769 "nvme_io_md": false, 00:27:18.769 "write_zeroes": true, 00:27:18.769 "zcopy": false, 00:27:18.769 "get_zone_info": false, 00:27:18.769 "zone_management": false, 00:27:18.769 "zone_append": false, 00:27:18.769 "compare": true, 00:27:18.769 "compare_and_write": false, 00:27:18.769 "abort": true, 00:27:18.769 "seek_hole": false, 00:27:18.769 "seek_data": false, 00:27:18.769 
"copy": true, 00:27:18.769 "nvme_iov_md": false 00:27:18.769 }, 00:27:18.769 "driver_specific": { 00:27:18.769 "nvme": [ 00:27:18.769 { 00:27:18.769 "pci_address": "0000:00:11.0", 00:27:18.769 "trid": { 00:27:18.769 "trtype": "PCIe", 00:27:18.769 "traddr": "0000:00:11.0" 00:27:18.769 }, 00:27:18.769 "ctrlr_data": { 00:27:18.769 "cntlid": 0, 00:27:18.769 "vendor_id": "0x1b36", 00:27:18.769 "model_number": "QEMU NVMe Ctrl", 00:27:18.769 "serial_number": "12341", 00:27:18.769 "firmware_revision": "8.0.0", 00:27:18.769 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:18.769 "oacs": { 00:27:18.769 "security": 0, 00:27:18.769 "format": 1, 00:27:18.769 "firmware": 0, 00:27:18.769 "ns_manage": 1 00:27:18.769 }, 00:27:18.769 "multi_ctrlr": false, 00:27:18.769 "ana_reporting": false 00:27:18.769 }, 00:27:18.769 "vs": { 00:27:18.769 "nvme_version": "1.4" 00:27:18.769 }, 00:27:18.769 "ns_data": { 00:27:18.769 "id": 1, 00:27:18.769 "can_share": false 00:27:18.769 } 00:27:18.769 } 00:27:18.769 ], 00:27:18.769 "mp_policy": "active_passive" 00:27:18.769 } 00:27:18.769 } 00:27:18.769 ]' 00:27:18.769 10:39:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:19.028 10:39:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:19.028 10:39:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:19.028 10:39:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:19.028 10:39:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:19.028 10:39:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:19.028 10:39:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:19.028 10:39:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:19.028 10:39:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:19.028 10:39:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:19.028 10:39:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:19.287 10:39:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=b0a71376-2ebc-4dd2-a8b0-a0d7be59b518 00:27:19.287 10:39:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:19.287 10:39:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b0a71376-2ebc-4dd2-a8b0-a0d7be59b518 00:27:19.287 10:39:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:19.546 10:39:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0ebe1643-5c42-4c36-a473-9d046ca1f57d 00:27:19.546 10:39:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0ebe1643-5c42-4c36-a473-9d046ca1f57d 00:27:19.806 10:39:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=c02e0083-405d-4fbc-8b31-995cd7bd02a2 00:27:19.806 10:39:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:19.806 10:39:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c02e0083-405d-4fbc-8b31-995cd7bd02a2 00:27:19.806 10:39:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:19.806 10:39:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:19.806 10:39:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=c02e0083-405d-4fbc-8b31-995cd7bd02a2 00:27:19.806 10:39:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:19.806 10:39:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size c02e0083-405d-4fbc-8b31-995cd7bd02a2 00:27:19.806 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c02e0083-405d-4fbc-8b31-995cd7bd02a2 00:27:19.806 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:19.806 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:19.806 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:19.806 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c02e0083-405d-4fbc-8b31-995cd7bd02a2 00:27:20.066 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:20.066 { 00:27:20.066 "name": "c02e0083-405d-4fbc-8b31-995cd7bd02a2", 00:27:20.066 "aliases": [ 00:27:20.066 "lvs/nvme0n1p0" 00:27:20.066 ], 00:27:20.066 "product_name": "Logical Volume", 00:27:20.066 "block_size": 4096, 00:27:20.066 "num_blocks": 26476544, 00:27:20.066 "uuid": "c02e0083-405d-4fbc-8b31-995cd7bd02a2", 00:27:20.066 "assigned_rate_limits": { 00:27:20.066 "rw_ios_per_sec": 0, 00:27:20.066 "rw_mbytes_per_sec": 0, 00:27:20.066 "r_mbytes_per_sec": 0, 00:27:20.066 "w_mbytes_per_sec": 0 00:27:20.066 }, 00:27:20.066 "claimed": false, 00:27:20.066 "zoned": false, 00:27:20.066 "supported_io_types": { 00:27:20.066 "read": true, 00:27:20.066 "write": true, 00:27:20.066 "unmap": true, 00:27:20.066 "flush": false, 00:27:20.066 "reset": true, 00:27:20.066 "nvme_admin": false, 00:27:20.066 "nvme_io": false, 00:27:20.066 "nvme_io_md": false, 00:27:20.066 "write_zeroes": true, 00:27:20.066 "zcopy": false, 00:27:20.066 "get_zone_info": false, 00:27:20.066 "zone_management": false, 00:27:20.066 "zone_append": false, 00:27:20.066 "compare": false, 00:27:20.066 "compare_and_write": false, 00:27:20.066 "abort": false, 00:27:20.066 "seek_hole": true, 00:27:20.066 "seek_data": true, 00:27:20.066 "copy": false, 00:27:20.066 "nvme_iov_md": false 00:27:20.066 }, 00:27:20.066 "driver_specific": { 00:27:20.066 "lvol": { 00:27:20.066 "lvol_store_uuid": "0ebe1643-5c42-4c36-a473-9d046ca1f57d", 00:27:20.066 "base_bdev": "nvme0n1", 00:27:20.066 "thin_provision": true, 00:27:20.066 "num_allocated_clusters": 0, 00:27:20.066 "snapshot": false, 00:27:20.066 "clone": false, 00:27:20.066 "esnap_clone": false 00:27:20.066 } 00:27:20.066 } 00:27:20.066 } 00:27:20.066 ]' 00:27:20.066 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:20.066 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:20.066 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:20.066 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:20.066 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:20.066 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:20.066 10:39:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:20.066 10:39:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:20.066 10:39:19 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:20.326 10:39:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:20.326 10:39:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:20.326 10:39:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size c02e0083-405d-4fbc-8b31-995cd7bd02a2 00:27:20.326 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c02e0083-405d-4fbc-8b31-995cd7bd02a2 00:27:20.326 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:20.326 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:20.326 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:20.326 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c02e0083-405d-4fbc-8b31-995cd7bd02a2 00:27:20.585 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:20.585 { 00:27:20.585 "name": "c02e0083-405d-4fbc-8b31-995cd7bd02a2", 00:27:20.585 "aliases": [ 00:27:20.585 "lvs/nvme0n1p0" 00:27:20.585 ], 00:27:20.585 "product_name": "Logical Volume", 00:27:20.585 "block_size": 4096, 00:27:20.585 "num_blocks": 26476544, 00:27:20.585 "uuid": "c02e0083-405d-4fbc-8b31-995cd7bd02a2", 00:27:20.585 "assigned_rate_limits": { 00:27:20.585 "rw_ios_per_sec": 0, 00:27:20.585 "rw_mbytes_per_sec": 0, 00:27:20.585 "r_mbytes_per_sec": 0, 00:27:20.585 "w_mbytes_per_sec": 0 00:27:20.585 }, 00:27:20.585 "claimed": false, 00:27:20.585 "zoned": false, 00:27:20.585 "supported_io_types": { 00:27:20.585 "read": true, 00:27:20.585 "write": true, 00:27:20.585 "unmap": true, 00:27:20.585 "flush": false, 00:27:20.585 "reset": true, 00:27:20.585 "nvme_admin": false, 00:27:20.585 "nvme_io": false, 00:27:20.585 "nvme_io_md": false, 00:27:20.585 "write_zeroes": true, 00:27:20.585 "zcopy": false, 00:27:20.585 "get_zone_info": false, 00:27:20.585 "zone_management": false, 00:27:20.585 "zone_append": false, 00:27:20.585 "compare": false, 00:27:20.585 "compare_and_write": false, 00:27:20.585 "abort": false, 00:27:20.585 "seek_hole": true, 00:27:20.585 "seek_data": true, 00:27:20.585 "copy": false, 00:27:20.585 "nvme_iov_md": false 00:27:20.585 }, 00:27:20.585 "driver_specific": { 00:27:20.585 "lvol": { 00:27:20.585 "lvol_store_uuid": "0ebe1643-5c42-4c36-a473-9d046ca1f57d", 00:27:20.585 "base_bdev": "nvme0n1", 00:27:20.585 "thin_provision": true, 00:27:20.585 "num_allocated_clusters": 0, 00:27:20.585 "snapshot": false, 00:27:20.585 "clone": false, 00:27:20.585 "esnap_clone": false 00:27:20.585 } 00:27:20.585 } 00:27:20.585 } 00:27:20.585 ]' 00:27:20.585 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:20.585 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:20.585 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:20.844 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:20.844 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:20.844 10:39:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:20.844 10:39:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:20.844 10:39:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:20.844 10:39:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:20.844 10:39:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size c02e0083-405d-4fbc-8b31-995cd7bd02a2 00:27:20.844 10:39:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c02e0083-405d-4fbc-8b31-995cd7bd02a2 00:27:20.844 10:39:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:20.844 10:39:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:20.844 10:39:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:20.844 10:39:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c02e0083-405d-4fbc-8b31-995cd7bd02a2 00:27:21.103 10:39:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:21.103 { 00:27:21.103 "name": "c02e0083-405d-4fbc-8b31-995cd7bd02a2", 00:27:21.103 "aliases": [ 00:27:21.103 "lvs/nvme0n1p0" 00:27:21.103 ], 00:27:21.103 "product_name": "Logical Volume", 00:27:21.103 "block_size": 4096, 00:27:21.103 "num_blocks": 26476544, 00:27:21.103 "uuid": "c02e0083-405d-4fbc-8b31-995cd7bd02a2", 00:27:21.103 "assigned_rate_limits": { 00:27:21.103 "rw_ios_per_sec": 0, 00:27:21.103 "rw_mbytes_per_sec": 0, 00:27:21.103 "r_mbytes_per_sec": 0, 00:27:21.103 "w_mbytes_per_sec": 0 00:27:21.103 }, 00:27:21.103 "claimed": false, 00:27:21.103 "zoned": false, 00:27:21.103 "supported_io_types": { 00:27:21.103 "read": true, 00:27:21.103 "write": true, 00:27:21.103 "unmap": true, 00:27:21.103 "flush": false, 00:27:21.103 "reset": true, 00:27:21.103 "nvme_admin": false, 00:27:21.103 "nvme_io": false, 00:27:21.103 "nvme_io_md": false, 00:27:21.103 "write_zeroes": true, 00:27:21.103 "zcopy": false, 00:27:21.103 "get_zone_info": false, 00:27:21.103 "zone_management": false, 00:27:21.103 "zone_append": false, 00:27:21.103 "compare": false, 00:27:21.103 "compare_and_write": false, 00:27:21.103 "abort": false, 00:27:21.103 "seek_hole": true, 00:27:21.103 "seek_data": true, 00:27:21.103 "copy": false, 00:27:21.103 "nvme_iov_md": false 00:27:21.103 }, 00:27:21.103 "driver_specific": { 00:27:21.103 "lvol": { 00:27:21.103 "lvol_store_uuid": "0ebe1643-5c42-4c36-a473-9d046ca1f57d", 00:27:21.103 "base_bdev": "nvme0n1", 00:27:21.103 "thin_provision": true, 00:27:21.103 "num_allocated_clusters": 0, 00:27:21.103 "snapshot": false, 00:27:21.103 "clone": false, 00:27:21.104 "esnap_clone": false 00:27:21.104 } 00:27:21.104 } 00:27:21.104 } 00:27:21.104 ]' 00:27:21.104 10:39:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:21.104 10:39:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:21.104 10:39:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:21.104 10:39:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:21.104 10:39:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:21.104 10:39:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:21.104 10:39:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:21.104 10:39:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c02e0083-405d-4fbc-8b31-995cd7bd02a2 
--l2p_dram_limit 10' 00:27:21.104 10:39:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:21.104 10:39:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:21.104 10:39:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:21.104 10:39:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c02e0083-405d-4fbc-8b31-995cd7bd02a2 --l2p_dram_limit 10 -c nvc0n1p0 00:27:21.365 [2024-12-07 10:39:20.632905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.365 [2024-12-07 10:39:20.632951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:21.365 [2024-12-07 10:39:20.632969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:21.365 [2024-12-07 10:39:20.633014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.365 [2024-12-07 10:39:20.633088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.365 [2024-12-07 10:39:20.633101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:21.365 [2024-12-07 10:39:20.633115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:21.365 [2024-12-07 10:39:20.633125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.365 [2024-12-07 10:39:20.633155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:21.365 [2024-12-07 10:39:20.634311] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:21.365 [2024-12-07 10:39:20.634340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.365 [2024-12-07 10:39:20.634351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:21.365 [2024-12-07 10:39:20.634365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:27:21.365 [2024-12-07 10:39:20.634375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.365 [2024-12-07 10:39:20.634591] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a9faa9d3-da78-4248-a35f-f1e2330b4cd7 00:27:21.365 [2024-12-07 10:39:20.636055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.365 [2024-12-07 10:39:20.636087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:21.365 [2024-12-07 10:39:20.636100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:21.365 [2024-12-07 10:39:20.636113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.365 [2024-12-07 10:39:20.643757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.365 [2024-12-07 10:39:20.643791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:21.365 [2024-12-07 10:39:20.643803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.612 ms 00:27:21.365 [2024-12-07 10:39:20.643815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.365 [2024-12-07 10:39:20.643941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.365 [2024-12-07 10:39:20.643958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:21.365 [2024-12-07 10:39:20.643969] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:27:21.365 [2024-12-07 10:39:20.643986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.365 [2024-12-07 10:39:20.644066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.365 [2024-12-07 10:39:20.644082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:21.365 [2024-12-07 10:39:20.644095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:21.365 [2024-12-07 10:39:20.644108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.365 [2024-12-07 10:39:20.644131] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:21.365 [2024-12-07 10:39:20.649239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.365 [2024-12-07 10:39:20.649273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:21.365 [2024-12-07 10:39:20.649290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.120 ms 00:27:21.365 [2024-12-07 10:39:20.649300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.365 [2024-12-07 10:39:20.649340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.365 [2024-12-07 10:39:20.649351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:21.365 [2024-12-07 10:39:20.649364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:21.365 [2024-12-07 10:39:20.649374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.365 [2024-12-07 10:39:20.649412] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:21.365 [2024-12-07 10:39:20.649551] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:21.365 [2024-12-07 10:39:20.649571] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:21.365 [2024-12-07 10:39:20.649585] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:21.365 [2024-12-07 10:39:20.649601] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:21.365 [2024-12-07 10:39:20.649613] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:21.365 [2024-12-07 10:39:20.649626] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:21.365 [2024-12-07 10:39:20.649636] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:21.365 [2024-12-07 10:39:20.649653] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:21.365 [2024-12-07 10:39:20.649664] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:21.365 [2024-12-07 10:39:20.649676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.365 [2024-12-07 10:39:20.649697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:21.365 [2024-12-07 10:39:20.649710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:27:21.365 [2024-12-07 10:39:20.649720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.365 [2024-12-07 10:39:20.649798] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.365 [2024-12-07 10:39:20.649809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:21.365 [2024-12-07 10:39:20.649821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:21.365 [2024-12-07 10:39:20.649830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.365 [2024-12-07 10:39:20.649926] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:21.365 [2024-12-07 10:39:20.649940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:21.365 [2024-12-07 10:39:20.649954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:21.365 [2024-12-07 10:39:20.649964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.365 [2024-12-07 10:39:20.649999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:21.365 [2024-12-07 10:39:20.650010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:21.365 [2024-12-07 10:39:20.650021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:21.365 [2024-12-07 10:39:20.650031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:21.365 [2024-12-07 10:39:20.650043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:21.365 [2024-12-07 10:39:20.650052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:21.365 [2024-12-07 10:39:20.650066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:21.365 [2024-12-07 10:39:20.650075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:21.365 [2024-12-07 10:39:20.650086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:21.365 [2024-12-07 10:39:20.650096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:21.365 [2024-12-07 10:39:20.650108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:21.365 [2024-12-07 10:39:20.650117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.365 [2024-12-07 10:39:20.650131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:21.365 [2024-12-07 10:39:20.650140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:21.366 [2024-12-07 10:39:20.650152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.366 [2024-12-07 10:39:20.650162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:21.366 [2024-12-07 10:39:20.650174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:21.366 [2024-12-07 10:39:20.650182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.366 [2024-12-07 10:39:20.650194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:21.366 [2024-12-07 10:39:20.650203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:21.366 [2024-12-07 10:39:20.650214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.366 [2024-12-07 10:39:20.650223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:21.366 [2024-12-07 10:39:20.650235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:21.366 [2024-12-07 10:39:20.650244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.366 [2024-12-07 10:39:20.650257] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:21.366 [2024-12-07 10:39:20.650266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:21.366 [2024-12-07 10:39:20.650277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.366 [2024-12-07 10:39:20.650286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:21.366 [2024-12-07 10:39:20.650300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:21.366 [2024-12-07 10:39:20.650309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:21.366 [2024-12-07 10:39:20.650321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:21.366 [2024-12-07 10:39:20.650329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:21.366 [2024-12-07 10:39:20.650342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:21.366 [2024-12-07 10:39:20.650351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:21.366 [2024-12-07 10:39:20.650363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:21.366 [2024-12-07 10:39:20.650372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.366 [2024-12-07 10:39:20.650383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:21.366 [2024-12-07 10:39:20.650393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:21.366 [2024-12-07 10:39:20.650404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.366 [2024-12-07 10:39:20.650412] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:21.366 [2024-12-07 10:39:20.650425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:21.366 [2024-12-07 10:39:20.650435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:21.366 [2024-12-07 10:39:20.650449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.366 [2024-12-07 10:39:20.650459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:21.366 [2024-12-07 10:39:20.650474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:21.366 [2024-12-07 10:39:20.650483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:21.366 [2024-12-07 10:39:20.650495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:21.366 [2024-12-07 10:39:20.650504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:21.366 [2024-12-07 10:39:20.650516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:21.366 [2024-12-07 10:39:20.650526] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:21.366 [2024-12-07 10:39:20.650544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:21.366 [2024-12-07 10:39:20.650556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:21.366 [2024-12-07 10:39:20.650568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:21.366 [2024-12-07 10:39:20.650579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:21.366 [2024-12-07 10:39:20.650591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:21.366 [2024-12-07 10:39:20.650602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:21.366 [2024-12-07 10:39:20.650614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:21.366 [2024-12-07 10:39:20.650624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:21.366 [2024-12-07 10:39:20.650647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:21.366 [2024-12-07 10:39:20.650658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:21.366 [2024-12-07 10:39:20.650675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:21.366 [2024-12-07 10:39:20.650685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:21.366 [2024-12-07 10:39:20.650698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:21.366 [2024-12-07 10:39:20.650708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:21.366 [2024-12-07 10:39:20.650721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:21.366 [2024-12-07 10:39:20.650731] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:21.366 [2024-12-07 10:39:20.650745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:21.366 [2024-12-07 10:39:20.650756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:21.366 [2024-12-07 10:39:20.650769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:21.366 [2024-12-07 10:39:20.650779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:21.366 [2024-12-07 10:39:20.650792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:21.366 [2024-12-07 10:39:20.650804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.366 [2024-12-07 10:39:20.650816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:21.366 [2024-12-07 10:39:20.650827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:27:21.366 [2024-12-07 10:39:20.650840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.366 [2024-12-07 10:39:20.650881] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:21.366 [2024-12-07 10:39:20.650898] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:25.559 [2024-12-07 10:39:24.310474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.559 [2024-12-07 10:39:24.310545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:25.559 [2024-12-07 10:39:24.310560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3665.532 ms 00:27:25.559 [2024-12-07 10:39:24.310590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.559 [2024-12-07 10:39:24.346972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.559 [2024-12-07 10:39:24.347048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.559 [2024-12-07 10:39:24.347064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.000 ms 00:27:25.559 [2024-12-07 10:39:24.347078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.559 [2024-12-07 10:39:24.347196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.559 [2024-12-07 10:39:24.347212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:25.559 [2024-12-07 10:39:24.347223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:25.559 [2024-12-07 10:39:24.347242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.559 [2024-12-07 10:39:24.392477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.559 [2024-12-07 10:39:24.392524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.559 [2024-12-07 10:39:24.392537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.250 ms 00:27:25.559 [2024-12-07 10:39:24.392550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.559 [2024-12-07 10:39:24.392599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.559 [2024-12-07 10:39:24.392617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.559 [2024-12-07 10:39:24.392628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:25.559 [2024-12-07 10:39:24.392650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.559 [2024-12-07 10:39:24.393149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.559 [2024-12-07 10:39:24.393167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.559 [2024-12-07 10:39:24.393179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:27:25.559 [2024-12-07 10:39:24.393191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.559 [2024-12-07 10:39:24.393287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.559 [2024-12-07 10:39:24.393301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.559 [2024-12-07 10:39:24.393314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:25.559 [2024-12-07 10:39:24.393330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.559 [2024-12-07 10:39:24.413543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.559 [2024-12-07 10:39:24.413586] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.559 [2024-12-07 10:39:24.413616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.226 ms 00:27:25.560 [2024-12-07 10:39:24.413629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.560 [2024-12-07 10:39:24.438232] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:25.560 [2024-12-07 10:39:24.441529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.560 [2024-12-07 10:39:24.441555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:25.560 [2024-12-07 10:39:24.441570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.861 ms 00:27:25.560 [2024-12-07 10:39:24.441580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.560 [2024-12-07 10:39:24.540567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.560 [2024-12-07 10:39:24.540634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:25.560 [2024-12-07 10:39:24.540653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.099 ms 00:27:25.560 [2024-12-07 10:39:24.540664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.560 [2024-12-07 10:39:24.540858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.560 [2024-12-07 10:39:24.540875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:25.560 [2024-12-07 10:39:24.540892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:27:25.560 [2024-12-07 10:39:24.540902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.560 [2024-12-07 10:39:24.576031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.560 [2024-12-07 10:39:24.576066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:25.560 [2024-12-07 10:39:24.576083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.131 ms 00:27:25.560 [2024-12-07 10:39:24.576093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.560 [2024-12-07 10:39:24.610191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.560 [2024-12-07 10:39:24.610225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:25.560 [2024-12-07 10:39:24.610241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.086 ms 00:27:25.560 [2024-12-07 10:39:24.610250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.560 [2024-12-07 10:39:24.610983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.560 [2024-12-07 10:39:24.611018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:25.560 [2024-12-07 10:39:24.611034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:27:25.560 [2024-12-07 10:39:24.611046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.560 [2024-12-07 10:39:24.709824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.560 [2024-12-07 10:39:24.709863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:25.560 [2024-12-07 10:39:24.709883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.878 ms 00:27:25.560 [2024-12-07 10:39:24.709894] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.560 [2024-12-07 10:39:24.746154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.560 [2024-12-07 10:39:24.746202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:25.560 [2024-12-07 10:39:24.746220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.202 ms 00:27:25.560 [2024-12-07 10:39:24.746230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.560 [2024-12-07 10:39:24.780736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.560 [2024-12-07 10:39:24.780770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:25.560 [2024-12-07 10:39:24.780784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.514 ms 00:27:25.560 [2024-12-07 10:39:24.780793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.560 [2024-12-07 10:39:24.814957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.560 [2024-12-07 10:39:24.814997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:25.560 [2024-12-07 10:39:24.815013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.158 ms 00:27:25.560 [2024-12-07 10:39:24.815022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.560 [2024-12-07 10:39:24.815084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.560 [2024-12-07 10:39:24.815096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:25.560 [2024-12-07 10:39:24.815113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:25.560 [2024-12-07 10:39:24.815123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.560 [2024-12-07 10:39:24.815219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.560 [2024-12-07 10:39:24.815235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:25.560 [2024-12-07 10:39:24.815247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:25.560 [2024-12-07 10:39:24.815257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.560 [2024-12-07 10:39:24.816259] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4189.705 ms, result 0 00:27:25.560 { 00:27:25.560 "name": "ftl0", 00:27:25.560 "uuid": "a9faa9d3-da78-4248-a35f-f1e2330b4cd7" 00:27:25.560 } 00:27:25.560 10:39:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:25.560 10:39:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:25.819 10:39:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:25.819 10:39:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:25.819 10:39:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:26.077 /dev/nbd0 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:26.077 1+0 records in 00:27:26.077 1+0 records out 00:27:26.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289915 s, 14.1 MB/s 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:27:26.077 10:39:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:26.077 [2024-12-07 10:39:25.414699] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:27:26.077 [2024-12-07 10:39:25.414827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81367 ] 00:27:26.335 [2024-12-07 10:39:25.596104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.592 [2024-12-07 10:39:25.704717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.966  [2024-12-07T10:39:28.253Z] Copying: 204/1024 [MB] (204 MBps) [2024-12-07T10:39:29.187Z] Copying: 401/1024 [MB] (197 MBps) [2024-12-07T10:39:30.122Z] Copying: 599/1024 [MB] (197 MBps) [2024-12-07T10:39:31.058Z] Copying: 793/1024 [MB] (194 MBps) [2024-12-07T10:39:31.316Z] Copying: 982/1024 [MB] (189 MBps) [2024-12-07T10:39:32.693Z] Copying: 1024/1024 [MB] (average 196 MBps) 00:27:33.340 00:27:33.340 10:39:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:35.244 10:39:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:35.244 [2024-12-07 10:39:34.233526] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:27:35.244 [2024-12-07 10:39:34.233651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81454 ] 00:27:35.244 [2024-12-07 10:39:34.415661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.244 [2024-12-07 10:39:34.541926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.618  [2024-12-07T10:39:37.344Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-07T10:39:38.277Z] Copying: 27/1024 [MB] (14 MBps) [2024-12-07T10:39:39.215Z] Copying: 42/1024 [MB] (14 MBps) [2024-12-07T10:39:40.151Z] Copying: 57/1024 [MB] (14 MBps) [2024-12-07T10:39:41.087Z] Copying: 72/1024 [MB] (14 MBps) [2024-12-07T10:39:42.078Z] Copying: 87/1024 [MB] (15 MBps) [2024-12-07T10:39:43.015Z] Copying: 103/1024 [MB] (15 MBps) [2024-12-07T10:39:43.954Z] Copying: 118/1024 [MB] (15 MBps) [2024-12-07T10:39:45.332Z] Copying: 134/1024 [MB] (15 MBps) [2024-12-07T10:39:46.267Z] Copying: 149/1024 [MB] (15 MBps) [2024-12-07T10:39:47.203Z] Copying: 165/1024 [MB] (15 MBps) [2024-12-07T10:39:48.141Z] Copying: 180/1024 [MB] (15 MBps) [2024-12-07T10:39:49.080Z] Copying: 196/1024 [MB] (15 MBps) [2024-12-07T10:39:50.018Z] Copying: 211/1024 [MB] (15 MBps) [2024-12-07T10:39:50.957Z] Copying: 227/1024 [MB] (15 MBps) [2024-12-07T10:39:52.339Z] Copying: 243/1024 [MB] (15 MBps) [2024-12-07T10:39:52.909Z] Copying: 258/1024 [MB] (15 MBps) [2024-12-07T10:39:54.289Z] Copying: 274/1024 [MB] (15 MBps) [2024-12-07T10:39:55.232Z] Copying: 290/1024 [MB] (15 MBps) [2024-12-07T10:39:56.168Z] Copying: 306/1024 [MB] (15 MBps) [2024-12-07T10:39:57.103Z] Copying: 321/1024 [MB] (15 MBps) [2024-12-07T10:39:58.036Z] Copying: 337/1024 [MB] (15 MBps) [2024-12-07T10:39:58.971Z] Copying: 352/1024 [MB] (15 MBps) [2024-12-07T10:39:59.905Z] Copying: 368/1024 [MB] (15 MBps) [2024-12-07T10:40:01.282Z] Copying: 383/1024 [MB] (15 MBps) [2024-12-07T10:40:02.217Z] Copying: 398/1024 [MB] (15 MBps) [2024-12-07T10:40:03.172Z] Copying: 414/1024 [MB] (15 MBps) [2024-12-07T10:40:04.109Z] Copying: 429/1024 [MB] (15 MBps) [2024-12-07T10:40:05.046Z] Copying: 444/1024 [MB] (15 MBps) [2024-12-07T10:40:05.984Z] Copying: 460/1024 [MB] (15 MBps) [2024-12-07T10:40:06.919Z] Copying: 475/1024 [MB] (15 MBps) [2024-12-07T10:40:08.296Z] Copying: 491/1024 [MB] (15 MBps) [2024-12-07T10:40:09.234Z] Copying: 506/1024 [MB] (15 MBps) [2024-12-07T10:40:10.171Z] Copying: 521/1024 [MB] (15 MBps) [2024-12-07T10:40:11.109Z] Copying: 537/1024 [MB] (15 MBps) [2024-12-07T10:40:12.047Z] Copying: 553/1024 [MB] (15 MBps) [2024-12-07T10:40:12.987Z] Copying: 568/1024 [MB] (15 MBps) [2024-12-07T10:40:13.998Z] Copying: 584/1024 [MB] (15 MBps) [2024-12-07T10:40:15.000Z] Copying: 599/1024 [MB] (15 MBps) [2024-12-07T10:40:15.938Z] Copying: 615/1024 [MB] (15 MBps) [2024-12-07T10:40:16.873Z] Copying: 630/1024 [MB] (15 MBps) [2024-12-07T10:40:18.247Z] Copying: 646/1024 [MB] (15 MBps) [2024-12-07T10:40:19.183Z] Copying: 662/1024 [MB] (15 MBps) [2024-12-07T10:40:20.118Z] Copying: 677/1024 [MB] (15 MBps) [2024-12-07T10:40:21.055Z] Copying: 693/1024 [MB] (15 MBps) [2024-12-07T10:40:21.993Z] Copying: 708/1024 [MB] (15 MBps) [2024-12-07T10:40:22.932Z] Copying: 723/1024 [MB] (15 MBps) [2024-12-07T10:40:23.866Z] Copying: 738/1024 [MB] (15 MBps) [2024-12-07T10:40:25.243Z] Copying: 754/1024 [MB] (15 MBps) [2024-12-07T10:40:26.178Z] Copying: 769/1024 [MB] (15 MBps) 
[2024-12-07T10:40:27.114Z] Copying: 785/1024 [MB] (15 MBps) [2024-12-07T10:40:28.049Z] Copying: 801/1024 [MB] (15 MBps) [2024-12-07T10:40:28.985Z] Copying: 817/1024 [MB] (15 MBps) [2024-12-07T10:40:29.921Z] Copying: 832/1024 [MB] (15 MBps) [2024-12-07T10:40:30.856Z] Copying: 847/1024 [MB] (15 MBps) [2024-12-07T10:40:32.233Z] Copying: 863/1024 [MB] (15 MBps) [2024-12-07T10:40:33.169Z] Copying: 879/1024 [MB] (15 MBps) [2024-12-07T10:40:34.106Z] Copying: 894/1024 [MB] (15 MBps) [2024-12-07T10:40:35.041Z] Copying: 910/1024 [MB] (15 MBps) [2024-12-07T10:40:35.974Z] Copying: 925/1024 [MB] (15 MBps) [2024-12-07T10:40:36.907Z] Copying: 940/1024 [MB] (15 MBps) [2024-12-07T10:40:37.843Z] Copying: 956/1024 [MB] (15 MBps) [2024-12-07T10:40:39.221Z] Copying: 971/1024 [MB] (15 MBps) [2024-12-07T10:40:40.157Z] Copying: 987/1024 [MB] (15 MBps) [2024-12-07T10:40:41.095Z] Copying: 1002/1024 [MB] (15 MBps) [2024-12-07T10:40:41.354Z] Copying: 1017/1024 [MB] (15 MBps) [2024-12-07T10:40:42.733Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:28:43.380 00:28:43.380 10:40:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:43.380 10:40:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:43.381 10:40:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:43.641 [2024-12-07 10:40:42.888912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.641 [2024-12-07 10:40:42.888966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:43.641 [2024-12-07 10:40:42.888993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:43.641 [2024-12-07 10:40:42.889009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.641 [2024-12-07 10:40:42.889043] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:43.641 [2024-12-07 10:40:42.893300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.641 [2024-12-07 10:40:42.893332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:43.641 [2024-12-07 10:40:42.893347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.238 ms 00:28:43.641 [2024-12-07 10:40:42.893358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.641 [2024-12-07 10:40:42.895766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.641 [2024-12-07 10:40:42.895802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:43.641 [2024-12-07 10:40:42.895818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.369 ms 00:28:43.641 [2024-12-07 10:40:42.895829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.641 [2024-12-07 10:40:42.913866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.641 [2024-12-07 10:40:42.913901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:43.641 [2024-12-07 10:40:42.913933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.038 ms 00:28:43.641 [2024-12-07 10:40:42.913943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.641 [2024-12-07 10:40:42.918798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.641 [2024-12-07 10:40:42.918828] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:43.641 [2024-12-07 10:40:42.918843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.812 ms 00:28:43.641 [2024-12-07 10:40:42.918853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.641 [2024-12-07 10:40:42.953990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.641 [2024-12-07 10:40:42.954022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:43.641 [2024-12-07 10:40:42.954038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.112 ms 00:28:43.641 [2024-12-07 10:40:42.954048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.641 [2024-12-07 10:40:42.975761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.641 [2024-12-07 10:40:42.975794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:43.641 [2024-12-07 10:40:42.975815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.701 ms 00:28:43.641 [2024-12-07 10:40:42.975841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.641 [2024-12-07 10:40:42.976060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.641 [2024-12-07 10:40:42.976075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:43.641 [2024-12-07 10:40:42.976088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:28:43.641 [2024-12-07 10:40:42.976098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.902 [2024-12-07 10:40:43.012034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.902 [2024-12-07 10:40:43.012067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:43.902 [2024-12-07 10:40:43.012083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.972 ms 00:28:43.902 [2024-12-07 10:40:43.012093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.902 [2024-12-07 10:40:43.046142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.902 [2024-12-07 10:40:43.046171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:43.902 [2024-12-07 10:40:43.046186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.060 ms 00:28:43.902 [2024-12-07 10:40:43.046195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.902 [2024-12-07 10:40:43.080915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.902 [2024-12-07 10:40:43.080948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:43.902 [2024-12-07 10:40:43.080964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.730 ms 00:28:43.902 [2024-12-07 10:40:43.080981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.902 [2024-12-07 10:40:43.115509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.902 [2024-12-07 10:40:43.115539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:43.902 [2024-12-07 10:40:43.115570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.454 ms 00:28:43.902 [2024-12-07 10:40:43.115579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.902 [2024-12-07 10:40:43.115621] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:28:43.902 [2024-12-07 10:40:43.115637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.115995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.116007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.116019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.116029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.116041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.116051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.116063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.116073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.116085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:43.902 [2024-12-07 10:40:43.116094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116517] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116798] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:43.903 [2024-12-07 10:40:43.116814] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:43.903 [2024-12-07 10:40:43.116826] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a9faa9d3-da78-4248-a35f-f1e2330b4cd7 00:28:43.903 [2024-12-07 10:40:43.116836] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:43.903 [2024-12-07 10:40:43.116850] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:43.903 [2024-12-07 10:40:43.116861] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:43.903 [2024-12-07 10:40:43.116874] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:43.903 [2024-12-07 10:40:43.116883] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:43.903 [2024-12-07 10:40:43.116894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:43.903 [2024-12-07 10:40:43.116903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:43.903 [2024-12-07 10:40:43.116914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:43.903 [2024-12-07 10:40:43.116922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:43.903 [2024-12-07 10:40:43.116934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.903 [2024-12-07 10:40:43.116949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:43.903 [2024-12-07 10:40:43.116961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.317 ms 00:28:43.903 [2024-12-07 10:40:43.116970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.903 [2024-12-07 10:40:43.136513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.903 [2024-12-07 10:40:43.136544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:43.903 [2024-12-07 10:40:43.136559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.511 ms 00:28:43.903 [2024-12-07 10:40:43.136568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.903 [2024-12-07 10:40:43.137095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.903 [2024-12-07 10:40:43.137108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:43.903 [2024-12-07 10:40:43.137121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:28:43.903 [2024-12-07 10:40:43.137130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.903 [2024-12-07 10:40:43.198666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:43.903 [2024-12-07 10:40:43.198696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:43.903 [2024-12-07 10:40:43.198710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:43.903 [2024-12-07 10:40:43.198720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.903 [2024-12-07 10:40:43.198791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:43.903 [2024-12-07 10:40:43.198802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:43.903 [2024-12-07 10:40:43.198815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:43.903 [2024-12-07 10:40:43.198824] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.903 [2024-12-07 10:40:43.198920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:43.903 [2024-12-07 10:40:43.198937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:43.903 [2024-12-07 10:40:43.198950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:43.903 [2024-12-07 10:40:43.198960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.904 [2024-12-07 10:40:43.198984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:43.904 [2024-12-07 10:40:43.199008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:43.904 [2024-12-07 10:40:43.199021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:43.904 [2024-12-07 10:40:43.199031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.163 [2024-12-07 10:40:43.316782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.163 [2024-12-07 10:40:43.316827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:44.163 [2024-12-07 10:40:43.316843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.163 [2024-12-07 10:40:43.316853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.163 [2024-12-07 10:40:43.413211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.163 [2024-12-07 10:40:43.413249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:44.164 [2024-12-07 10:40:43.413281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.164 [2024-12-07 10:40:43.413291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.164 [2024-12-07 10:40:43.413399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.164 [2024-12-07 10:40:43.413412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:44.164 [2024-12-07 10:40:43.413429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.164 [2024-12-07 10:40:43.413439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.164 [2024-12-07 10:40:43.413493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.164 [2024-12-07 10:40:43.413505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:44.164 [2024-12-07 10:40:43.413518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.164 [2024-12-07 10:40:43.413528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.164 [2024-12-07 10:40:43.413648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.164 [2024-12-07 10:40:43.413661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:44.164 [2024-12-07 10:40:43.413674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.164 [2024-12-07 10:40:43.413686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.164 [2024-12-07 10:40:43.413726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.164 [2024-12-07 10:40:43.413738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:44.164 [2024-12-07 10:40:43.413750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:28:44.164 [2024-12-07 10:40:43.413760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.164 [2024-12-07 10:40:43.413801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.164 [2024-12-07 10:40:43.413812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:44.164 [2024-12-07 10:40:43.413825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.164 [2024-12-07 10:40:43.413837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.164 [2024-12-07 10:40:43.413887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.164 [2024-12-07 10:40:43.413899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:44.164 [2024-12-07 10:40:43.413912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.164 [2024-12-07 10:40:43.413922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.164 [2024-12-07 10:40:43.414076] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.958 ms, result 0 00:28:44.164 true 00:28:44.164 10:40:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81208 00:28:44.164 10:40:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81208 00:28:44.164 10:40:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:44.423 [2024-12-07 10:40:43.562446] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:28:44.423 [2024-12-07 10:40:43.562685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82156 ] 00:28:44.423 [2024-12-07 10:40:43.752262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.681 [2024-12-07 10:40:43.865179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.088  [2024-12-07T10:40:46.389Z] Copying: 211/1024 [MB] (211 MBps) [2024-12-07T10:40:47.323Z] Copying: 427/1024 [MB] (216 MBps) [2024-12-07T10:40:48.256Z] Copying: 645/1024 [MB] (217 MBps) [2024-12-07T10:40:49.189Z] Copying: 860/1024 [MB] (214 MBps) [2024-12-07T10:40:50.123Z] Copying: 1024/1024 [MB] (average 213 MBps) 00:28:50.770 00:28:50.770 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81208 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:50.770 10:40:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:51.027 [2024-12-07 10:40:50.145587] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:28:51.027 [2024-12-07 10:40:50.145702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82225 ] 00:28:51.027 [2024-12-07 10:40:50.332056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.284 [2024-12-07 10:40:50.441748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.542 [2024-12-07 10:40:50.785046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:51.542 [2024-12-07 10:40:50.785120] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:51.542 [2024-12-07 10:40:50.851014] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:51.542 [2024-12-07 10:40:50.851450] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:51.542 [2024-12-07 10:40:50.851866] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:52.111 [2024-12-07 10:40:51.172443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.111 [2024-12-07 10:40:51.172487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:52.111 [2024-12-07 10:40:51.172502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:52.111 [2024-12-07 10:40:51.172516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.111 [2024-12-07 10:40:51.172561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.111 [2024-12-07 10:40:51.172573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:52.111 [2024-12-07 10:40:51.172583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:52.111 [2024-12-07 10:40:51.172592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.111 [2024-12-07 10:40:51.172611] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:52.111 [2024-12-07 10:40:51.173532] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:52.111 [2024-12-07 10:40:51.173554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.111 [2024-12-07 10:40:51.173565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:52.111 [2024-12-07 10:40:51.173576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:28:52.111 [2024-12-07 10:40:51.173585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.111 [2024-12-07 10:40:51.175077] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:52.111 [2024-12-07 10:40:51.194075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.111 [2024-12-07 10:40:51.194112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:52.111 [2024-12-07 10:40:51.194127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.030 ms 00:28:52.111 [2024-12-07 10:40:51.194138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.111 [2024-12-07 10:40:51.194203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.111 [2024-12-07 10:40:51.194216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:28:52.112 [2024-12-07 10:40:51.194227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:52.112 [2024-12-07 10:40:51.194237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.112 [2024-12-07 10:40:51.201127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.112 [2024-12-07 10:40:51.201165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:52.112 [2024-12-07 10:40:51.201176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.828 ms 00:28:52.112 [2024-12-07 10:40:51.201186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.112 [2024-12-07 10:40:51.201290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.112 [2024-12-07 10:40:51.201304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:52.112 [2024-12-07 10:40:51.201316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:28:52.112 [2024-12-07 10:40:51.201326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.112 [2024-12-07 10:40:51.201369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.112 [2024-12-07 10:40:51.201383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:52.112 [2024-12-07 10:40:51.201393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:52.112 [2024-12-07 10:40:51.201402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.112 [2024-12-07 10:40:51.201425] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:52.112 [2024-12-07 10:40:51.206198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.112 [2024-12-07 10:40:51.206229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:52.112 [2024-12-07 10:40:51.206240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.786 ms 00:28:52.112 [2024-12-07 10:40:51.206250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.112 [2024-12-07 10:40:51.206298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.112 [2024-12-07 10:40:51.206310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:52.112 [2024-12-07 10:40:51.206320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:52.112 [2024-12-07 10:40:51.206330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.112 [2024-12-07 10:40:51.206386] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:52.112 [2024-12-07 10:40:51.206418] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:52.112 [2024-12-07 10:40:51.206455] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:52.112 [2024-12-07 10:40:51.206472] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:52.112 [2024-12-07 10:40:51.206560] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:52.112 [2024-12-07 10:40:51.206573] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:52.112 
[2024-12-07 10:40:51.206587] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:52.112 [2024-12-07 10:40:51.206608] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:52.112 [2024-12-07 10:40:51.206621] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:52.112 [2024-12-07 10:40:51.206632] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:52.112 [2024-12-07 10:40:51.206642] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:52.112 [2024-12-07 10:40:51.206661] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:52.112 [2024-12-07 10:40:51.206671] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:52.112 [2024-12-07 10:40:51.206681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.112 [2024-12-07 10:40:51.206691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:52.112 [2024-12-07 10:40:51.206701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:28:52.112 [2024-12-07 10:40:51.206710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.112 [2024-12-07 10:40:51.206779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.112 [2024-12-07 10:40:51.206794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:52.112 [2024-12-07 10:40:51.206804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:52.112 [2024-12-07 10:40:51.206814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.112 [2024-12-07 10:40:51.206906] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:52.112 [2024-12-07 10:40:51.206920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:52.112 [2024-12-07 10:40:51.206932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:52.112 [2024-12-07 10:40:51.206942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:52.112 [2024-12-07 10:40:51.206952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:52.112 [2024-12-07 10:40:51.206962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:52.112 [2024-12-07 10:40:51.206971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:52.112 [2024-12-07 10:40:51.206993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:52.112 [2024-12-07 10:40:51.207003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:52.112 [2024-12-07 10:40:51.207023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:52.112 [2024-12-07 10:40:51.207033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:52.112 [2024-12-07 10:40:51.207042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:52.112 [2024-12-07 10:40:51.207051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:52.112 [2024-12-07 10:40:51.207060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:52.112 [2024-12-07 10:40:51.207070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:52.112 [2024-12-07 10:40:51.207079] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:52.112 [2024-12-07 10:40:51.207088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:52.112 [2024-12-07 10:40:51.207097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:52.112 [2024-12-07 10:40:51.207106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:52.112 [2024-12-07 10:40:51.207115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:52.112 [2024-12-07 10:40:51.207124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:52.112 [2024-12-07 10:40:51.207133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:52.112 [2024-12-07 10:40:51.207142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:52.112 [2024-12-07 10:40:51.207151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:52.112 [2024-12-07 10:40:51.207160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:52.112 [2024-12-07 10:40:51.207168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:52.112 [2024-12-07 10:40:51.207177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:52.112 [2024-12-07 10:40:51.207185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:52.112 [2024-12-07 10:40:51.207194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:52.112 [2024-12-07 10:40:51.207204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:52.112 [2024-12-07 10:40:51.207212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:52.112 [2024-12-07 10:40:51.207221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:52.112 [2024-12-07 10:40:51.207229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:52.112 [2024-12-07 10:40:51.207238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:52.113 [2024-12-07 10:40:51.207246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:52.113 [2024-12-07 10:40:51.207255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:52.113 [2024-12-07 10:40:51.207263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:52.113 [2024-12-07 10:40:51.207272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:52.113 [2024-12-07 10:40:51.207282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:52.113 [2024-12-07 10:40:51.207291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:52.113 [2024-12-07 10:40:51.207300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:52.113 [2024-12-07 10:40:51.207309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:52.113 [2024-12-07 10:40:51.207318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:52.113 [2024-12-07 10:40:51.207327] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:52.113 [2024-12-07 10:40:51.207337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:52.113 [2024-12-07 10:40:51.207351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:52.113 [2024-12-07 10:40:51.207360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:52.113 [2024-12-07 
10:40:51.207370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:52.113 [2024-12-07 10:40:51.207379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:52.113 [2024-12-07 10:40:51.207388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:52.113 [2024-12-07 10:40:51.207398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:52.113 [2024-12-07 10:40:51.207406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:52.113 [2024-12-07 10:40:51.207416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:52.113 [2024-12-07 10:40:51.207425] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:52.113 [2024-12-07 10:40:51.207437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:52.113 [2024-12-07 10:40:51.207449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:52.113 [2024-12-07 10:40:51.207459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:52.113 [2024-12-07 10:40:51.207469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:52.113 [2024-12-07 10:40:51.207480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:52.113 [2024-12-07 10:40:51.207490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:52.113 [2024-12-07 10:40:51.207500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:52.113 [2024-12-07 10:40:51.207510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:52.113 [2024-12-07 10:40:51.207520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:52.113 [2024-12-07 10:40:51.207530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:52.113 [2024-12-07 10:40:51.207540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:52.113 [2024-12-07 10:40:51.207550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:52.113 [2024-12-07 10:40:51.207560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:52.113 [2024-12-07 10:40:51.207570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:52.113 [2024-12-07 10:40:51.207580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:52.113 [2024-12-07 10:40:51.207589] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:28:52.113 [2024-12-07 10:40:51.207600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:52.113 [2024-12-07 10:40:51.207611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:52.113 [2024-12-07 10:40:51.207621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:52.113 [2024-12-07 10:40:51.207633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:52.113 [2024-12-07 10:40:51.207643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:52.113 [2024-12-07 10:40:51.207653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.113 [2024-12-07 10:40:51.207663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:52.113 [2024-12-07 10:40:51.207673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:28:52.113 [2024-12-07 10:40:51.207683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.113 [2024-12-07 10:40:51.247185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.113 [2024-12-07 10:40:51.247222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:52.113 [2024-12-07 10:40:51.247236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.511 ms 00:28:52.113 [2024-12-07 10:40:51.247248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.113 [2024-12-07 10:40:51.247330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.113 [2024-12-07 10:40:51.247342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:52.113 [2024-12-07 10:40:51.247353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:52.113 [2024-12-07 10:40:51.247362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.113 [2024-12-07 10:40:51.305232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.113 [2024-12-07 10:40:51.305273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:52.113 [2024-12-07 10:40:51.305291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.903 ms 00:28:52.113 [2024-12-07 10:40:51.305301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.113 [2024-12-07 10:40:51.305345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.113 [2024-12-07 10:40:51.305356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:52.113 [2024-12-07 10:40:51.305367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:52.113 [2024-12-07 10:40:51.305376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.113 [2024-12-07 10:40:51.305891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.113 [2024-12-07 10:40:51.305905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:52.113 [2024-12-07 10:40:51.305916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:28:52.113 [2024-12-07 10:40:51.305932] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.113 [2024-12-07 10:40:51.306080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.113 [2024-12-07 10:40:51.306094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:52.113 [2024-12-07 10:40:51.306105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:28:52.113 [2024-12-07 10:40:51.306115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.113 [2024-12-07 10:40:51.325384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.113 [2024-12-07 10:40:51.325419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:52.113 [2024-12-07 10:40:51.325432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.279 ms 00:28:52.113 [2024-12-07 10:40:51.325442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.114 [2024-12-07 10:40:51.344173] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:52.114 [2024-12-07 10:40:51.344215] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:52.114 [2024-12-07 10:40:51.344231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.114 [2024-12-07 10:40:51.344243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:52.114 [2024-12-07 10:40:51.344254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.715 ms 00:28:52.114 [2024-12-07 10:40:51.344265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.114 [2024-12-07 10:40:51.372980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.114 [2024-12-07 10:40:51.373018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:52.114 [2024-12-07 10:40:51.373032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.710 ms 00:28:52.114 [2024-12-07 10:40:51.373042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.114 [2024-12-07 10:40:51.390399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.114 [2024-12-07 10:40:51.390433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:52.114 [2024-12-07 10:40:51.390461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.324 ms 00:28:52.114 [2024-12-07 10:40:51.390471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.114 [2024-12-07 10:40:51.408303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.114 [2024-12-07 10:40:51.408336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:52.114 [2024-12-07 10:40:51.408364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.821 ms 00:28:52.114 [2024-12-07 10:40:51.408374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.114 [2024-12-07 10:40:51.409158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.114 [2024-12-07 10:40:51.409182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:52.114 [2024-12-07 10:40:51.409194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:28:52.114 [2024-12-07 10:40:51.409205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:52.374 [2024-12-07 10:40:51.495388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.374 [2024-12-07 10:40:51.495453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:52.374 [2024-12-07 10:40:51.495486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.300 ms 00:28:52.374 [2024-12-07 10:40:51.495497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.374 [2024-12-07 10:40:51.506056] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:52.374 [2024-12-07 10:40:51.509055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.374 [2024-12-07 10:40:51.509094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:52.374 [2024-12-07 10:40:51.509124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.508 ms 00:28:52.374 [2024-12-07 10:40:51.509140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.374 [2024-12-07 10:40:51.509233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.374 [2024-12-07 10:40:51.509247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:52.374 [2024-12-07 10:40:51.509258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:52.374 [2024-12-07 10:40:51.509268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.374 [2024-12-07 10:40:51.509366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.374 [2024-12-07 10:40:51.509379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:52.374 [2024-12-07 10:40:51.509390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:52.374 [2024-12-07 10:40:51.509399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.374 [2024-12-07 10:40:51.509445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.374 [2024-12-07 10:40:51.509456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:52.374 [2024-12-07 10:40:51.509467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:52.374 [2024-12-07 10:40:51.509476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.374 [2024-12-07 10:40:51.509512] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:52.374 [2024-12-07 10:40:51.509525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.374 [2024-12-07 10:40:51.509536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:52.374 [2024-12-07 10:40:51.509546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:52.374 [2024-12-07 10:40:51.509560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.374 [2024-12-07 10:40:51.545843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.374 [2024-12-07 10:40:51.545885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:52.374 [2024-12-07 10:40:51.545915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.321 ms 00:28:52.374 [2024-12-07 10:40:51.545926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.374 [2024-12-07 10:40:51.546011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.374 [2024-12-07 
10:40:51.546024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:52.374 [2024-12-07 10:40:51.546034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:52.374 [2024-12-07 10:40:51.546046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.374 [2024-12-07 10:40:51.547169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.863 ms, result 0 00:28:53.310  [2024-12-07T10:40:53.597Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-07T10:40:54.973Z] Copying: 45/1024 [MB] (22 MBps) [2024-12-07T10:40:55.908Z] Copying: 67/1024 [MB] (21 MBps) [2024-12-07T10:40:56.842Z] Copying: 90/1024 [MB] (23 MBps) [2024-12-07T10:40:57.776Z] Copying: 114/1024 [MB] (23 MBps) [2024-12-07T10:40:58.711Z] Copying: 138/1024 [MB] (23 MBps) [2024-12-07T10:40:59.646Z] Copying: 162/1024 [MB] (23 MBps) [2024-12-07T10:41:00.583Z] Copying: 186/1024 [MB] (24 MBps) [2024-12-07T10:41:01.957Z] Copying: 211/1024 [MB] (24 MBps) [2024-12-07T10:41:02.549Z] Copying: 235/1024 [MB] (24 MBps) [2024-12-07T10:41:03.927Z] Copying: 260/1024 [MB] (24 MBps) [2024-12-07T10:41:04.868Z] Copying: 285/1024 [MB] (24 MBps) [2024-12-07T10:41:05.803Z] Copying: 310/1024 [MB] (25 MBps) [2024-12-07T10:41:06.739Z] Copying: 335/1024 [MB] (24 MBps) [2024-12-07T10:41:07.676Z] Copying: 359/1024 [MB] (24 MBps) [2024-12-07T10:41:08.614Z] Copying: 384/1024 [MB] (24 MBps) [2024-12-07T10:41:09.551Z] Copying: 408/1024 [MB] (24 MBps) [2024-12-07T10:41:10.930Z] Copying: 431/1024 [MB] (23 MBps) [2024-12-07T10:41:11.868Z] Copying: 456/1024 [MB] (24 MBps) [2024-12-07T10:41:12.807Z] Copying: 481/1024 [MB] (24 MBps) [2024-12-07T10:41:13.744Z] Copying: 506/1024 [MB] (24 MBps) [2024-12-07T10:41:14.680Z] Copying: 531/1024 [MB] (25 MBps) [2024-12-07T10:41:15.616Z] Copying: 556/1024 [MB] (24 MBps) [2024-12-07T10:41:16.645Z] Copying: 580/1024 [MB] (24 MBps) [2024-12-07T10:41:17.590Z] Copying: 603/1024 [MB] (22 MBps) [2024-12-07T10:41:18.521Z] Copying: 626/1024 [MB] (23 MBps) [2024-12-07T10:41:19.894Z] Copying: 651/1024 [MB] (24 MBps) [2024-12-07T10:41:20.830Z] Copying: 675/1024 [MB] (24 MBps) [2024-12-07T10:41:21.766Z] Copying: 702/1024 [MB] (26 MBps) [2024-12-07T10:41:22.702Z] Copying: 726/1024 [MB] (24 MBps) [2024-12-07T10:41:23.638Z] Copying: 751/1024 [MB] (24 MBps) [2024-12-07T10:41:24.574Z] Copying: 775/1024 [MB] (23 MBps) [2024-12-07T10:41:25.508Z] Copying: 799/1024 [MB] (24 MBps) [2024-12-07T10:41:26.881Z] Copying: 823/1024 [MB] (23 MBps) [2024-12-07T10:41:27.817Z] Copying: 847/1024 [MB] (24 MBps) [2024-12-07T10:41:28.753Z] Copying: 872/1024 [MB] (24 MBps) [2024-12-07T10:41:29.690Z] Copying: 896/1024 [MB] (24 MBps) [2024-12-07T10:41:30.627Z] Copying: 920/1024 [MB] (23 MBps) [2024-12-07T10:41:31.568Z] Copying: 943/1024 [MB] (23 MBps) [2024-12-07T10:41:32.510Z] Copying: 966/1024 [MB] (23 MBps) [2024-12-07T10:41:33.887Z] Copying: 990/1024 [MB] (24 MBps) [2024-12-07T10:41:34.825Z] Copying: 1015/1024 [MB] (24 MBps) [2024-12-07T10:41:34.825Z] Copying: 1048524/1048576 [kB] (8568 kBps) [2024-12-07T10:41:34.825Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-07 10:41:34.546879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.472 [2024-12-07 10:41:34.546960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:35.472 [2024-12-07 10:41:34.546991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:35.472 [2024-12-07 10:41:34.547003] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.472 [2024-12-07 10:41:34.549585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:35.472 [2024-12-07 10:41:34.554517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.472 [2024-12-07 10:41:34.554569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:35.472 [2024-12-07 10:41:34.554600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.892 ms 00:29:35.472 [2024-12-07 10:41:34.554616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.472 [2024-12-07 10:41:34.565106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.472 [2024-12-07 10:41:34.565145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:35.472 [2024-12-07 10:41:34.565175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.799 ms 00:29:35.472 [2024-12-07 10:41:34.565185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.472 [2024-12-07 10:41:34.587991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.472 [2024-12-07 10:41:34.588033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:35.472 [2024-12-07 10:41:34.588048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.825 ms 00:29:35.472 [2024-12-07 10:41:34.588059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.472 [2024-12-07 10:41:34.592966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.472 [2024-12-07 10:41:34.593004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:35.472 [2024-12-07 10:41:34.593016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.851 ms 00:29:35.472 [2024-12-07 10:41:34.593025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.472 [2024-12-07 10:41:34.627474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.473 [2024-12-07 10:41:34.627509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:35.473 [2024-12-07 10:41:34.627538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.431 ms 00:29:35.473 [2024-12-07 10:41:34.627548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.473 [2024-12-07 10:41:34.647548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.473 [2024-12-07 10:41:34.647589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:35.473 [2024-12-07 10:41:34.647618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.994 ms 00:29:35.473 [2024-12-07 10:41:34.647628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.473 [2024-12-07 10:41:34.764400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.473 [2024-12-07 10:41:34.764455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:35.473 [2024-12-07 10:41:34.764476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.920 ms 00:29:35.473 [2024-12-07 10:41:34.764486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.473 [2024-12-07 10:41:34.799089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.473 [2024-12-07 10:41:34.799126] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:35.473 [2024-12-07 10:41:34.799138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.642 ms 00:29:35.473 [2024-12-07 10:41:34.799183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.733 [2024-12-07 10:41:34.833555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.733 [2024-12-07 10:41:34.833595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:35.733 [2024-12-07 10:41:34.833607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.389 ms 00:29:35.733 [2024-12-07 10:41:34.833616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.733 [2024-12-07 10:41:34.869181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.733 [2024-12-07 10:41:34.869218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:35.733 [2024-12-07 10:41:34.869246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.568 ms 00:29:35.733 [2024-12-07 10:41:34.869255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.733 [2024-12-07 10:41:34.906245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.733 [2024-12-07 10:41:34.906280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:35.733 [2024-12-07 10:41:34.906292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.973 ms 00:29:35.733 [2024-12-07 10:41:34.906318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.733 [2024-12-07 10:41:34.906355] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:35.733 [2024-12-07 10:41:34.906369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105216 / 261120 wr_cnt: 1 state: open 00:29:35.733 [2024-12-07 10:41:34.906382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906790] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.906980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 
10:41:34.907068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:35.733 [2024-12-07 10:41:34.907153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:29:35.734 [2024-12-07 10:41:34.907334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:35.734 [2024-12-07 10:41:34.907481] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:35.734 [2024-12-07 10:41:34.907491] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a9faa9d3-da78-4248-a35f-f1e2330b4cd7 00:29:35.734 [2024-12-07 10:41:34.907519] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105216 00:29:35.734 [2024-12-07 10:41:34.907529] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106176 00:29:35.734 [2024-12-07 10:41:34.907539] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105216 00:29:35.734 [2024-12-07 10:41:34.907550] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:29:35.734 [2024-12-07 10:41:34.907559] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:35.734 [2024-12-07 10:41:34.907570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:35.734 [2024-12-07 10:41:34.907580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:35.734 [2024-12-07 10:41:34.907589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:35.734 [2024-12-07 10:41:34.907598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:35.734 [2024-12-07 10:41:34.907607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.734 [2024-12-07 10:41:34.907618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:35.734 [2024-12-07 10:41:34.907629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:29:35.734 [2024-12-07 10:41:34.907638] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:35.734 [2024-12-07 10:41:34.927574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.734 [2024-12-07 10:41:34.927605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:35.734 [2024-12-07 10:41:34.927617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.934 ms 00:29:35.734 [2024-12-07 10:41:34.927627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.734 [2024-12-07 10:41:34.928168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.734 [2024-12-07 10:41:34.928190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:35.734 [2024-12-07 10:41:34.928205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:29:35.734 [2024-12-07 10:41:34.928215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.734 [2024-12-07 10:41:34.979396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:35.734 [2024-12-07 10:41:34.979432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:35.734 [2024-12-07 10:41:34.979446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:35.734 [2024-12-07 10:41:34.979456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.734 [2024-12-07 10:41:34.979514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:35.734 [2024-12-07 10:41:34.979532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:35.734 [2024-12-07 10:41:34.979549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:35.734 [2024-12-07 10:41:34.979559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.734 [2024-12-07 10:41:34.979642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:35.734 [2024-12-07 10:41:34.979660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:35.734 [2024-12-07 10:41:34.979671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:35.734 [2024-12-07 10:41:34.979681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.734 [2024-12-07 10:41:34.979698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:35.734 [2024-12-07 10:41:34.979711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:35.734 [2024-12-07 10:41:34.979722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:35.734 [2024-12-07 10:41:34.979731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.993 [2024-12-07 10:41:35.102945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:35.993 [2024-12-07 10:41:35.103005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:35.993 [2024-12-07 10:41:35.103019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:35.993 [2024-12-07 10:41:35.103046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.993 [2024-12-07 10:41:35.197696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:35.993 [2024-12-07 10:41:35.197749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:35.993 [2024-12-07 10:41:35.197763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:29:35.993 [2024-12-07 10:41:35.197778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.993 [2024-12-07 10:41:35.197897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:35.993 [2024-12-07 10:41:35.197910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:35.993 [2024-12-07 10:41:35.197921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:35.993 [2024-12-07 10:41:35.197931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.993 [2024-12-07 10:41:35.197968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:35.993 [2024-12-07 10:41:35.197979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:35.993 [2024-12-07 10:41:35.197989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:35.993 [2024-12-07 10:41:35.198020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.993 [2024-12-07 10:41:35.198149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:35.993 [2024-12-07 10:41:35.198163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:35.993 [2024-12-07 10:41:35.198174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:35.993 [2024-12-07 10:41:35.198184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.993 [2024-12-07 10:41:35.198223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:35.993 [2024-12-07 10:41:35.198235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:35.993 [2024-12-07 10:41:35.198245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:35.993 [2024-12-07 10:41:35.198255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.993 [2024-12-07 10:41:35.198296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:35.993 [2024-12-07 10:41:35.198312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:35.993 [2024-12-07 10:41:35.198323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:35.993 [2024-12-07 10:41:35.198333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.993 [2024-12-07 10:41:35.198376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:35.993 [2024-12-07 10:41:35.198391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:35.993 [2024-12-07 10:41:35.198401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:35.993 [2024-12-07 10:41:35.198412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.993 [2024-12-07 10:41:35.198536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 654.575 ms, result 0 00:29:37.895 00:29:37.895 00:29:37.895 10:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:39.273 10:41:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:39.273 [2024-12-07 10:41:38.613624] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:29:39.273 [2024-12-07 10:41:38.613754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82709 ] 00:29:39.532 [2024-12-07 10:41:38.793910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.791 [2024-12-07 10:41:38.900997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.050 [2024-12-07 10:41:39.242739] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:40.050 [2024-12-07 10:41:39.242817] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:40.311 [2024-12-07 10:41:39.404177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.311 [2024-12-07 10:41:39.404229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:40.311 [2024-12-07 10:41:39.404261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:40.311 [2024-12-07 10:41:39.404272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.311 [2024-12-07 10:41:39.404320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.311 [2024-12-07 10:41:39.404336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:40.311 [2024-12-07 10:41:39.404346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:40.311 [2024-12-07 10:41:39.404357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.311 [2024-12-07 10:41:39.404378] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:40.311 [2024-12-07 10:41:39.405330] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:40.311 [2024-12-07 10:41:39.405363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.311 [2024-12-07 10:41:39.405375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:40.311 [2024-12-07 10:41:39.405386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:29:40.311 [2024-12-07 10:41:39.405396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.311 [2024-12-07 10:41:39.406972] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:40.311 [2024-12-07 10:41:39.425267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.311 [2024-12-07 10:41:39.425304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:40.311 [2024-12-07 10:41:39.425334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.326 ms 00:29:40.311 [2024-12-07 10:41:39.425345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.311 [2024-12-07 10:41:39.425415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.311 [2024-12-07 10:41:39.425428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:40.311 [2024-12-07 10:41:39.425438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:40.311 [2024-12-07 10:41:39.425448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.311 [2024-12-07 10:41:39.432338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:40.311 [2024-12-07 10:41:39.432367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:40.311 [2024-12-07 10:41:39.432378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.831 ms 00:29:40.311 [2024-12-07 10:41:39.432392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.311 [2024-12-07 10:41:39.432486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.311 [2024-12-07 10:41:39.432500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:40.311 [2024-12-07 10:41:39.432513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:40.311 [2024-12-07 10:41:39.432522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.311 [2024-12-07 10:41:39.432563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.311 [2024-12-07 10:41:39.432575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:40.311 [2024-12-07 10:41:39.432586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:40.311 [2024-12-07 10:41:39.432595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.311 [2024-12-07 10:41:39.432622] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:40.311 [2024-12-07 10:41:39.437261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.311 [2024-12-07 10:41:39.437291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:40.311 [2024-12-07 10:41:39.437323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.650 ms 00:29:40.311 [2024-12-07 10:41:39.437333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.311 [2024-12-07 10:41:39.437366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.311 [2024-12-07 10:41:39.437377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:40.311 [2024-12-07 10:41:39.437387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:40.311 [2024-12-07 10:41:39.437397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.311 [2024-12-07 10:41:39.437448] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:40.311 [2024-12-07 10:41:39.437473] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:40.311 [2024-12-07 10:41:39.437507] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:40.311 [2024-12-07 10:41:39.437528] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:40.311 [2024-12-07 10:41:39.437631] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:40.311 [2024-12-07 10:41:39.437645] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:40.311 [2024-12-07 10:41:39.437658] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:40.311 [2024-12-07 10:41:39.437671] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:40.311 [2024-12-07 10:41:39.437683] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:40.311 [2024-12-07 10:41:39.437694] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:40.311 [2024-12-07 10:41:39.437703] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:40.311 [2024-12-07 10:41:39.437717] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:40.311 [2024-12-07 10:41:39.437727] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:40.311 [2024-12-07 10:41:39.437738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.311 [2024-12-07 10:41:39.437748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:40.311 [2024-12-07 10:41:39.437759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:29:40.312 [2024-12-07 10:41:39.437768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.312 [2024-12-07 10:41:39.437839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.312 [2024-12-07 10:41:39.437850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:40.312 [2024-12-07 10:41:39.437860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:40.312 [2024-12-07 10:41:39.437870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.312 [2024-12-07 10:41:39.437970] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:40.312 [2024-12-07 10:41:39.438011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:40.312 [2024-12-07 10:41:39.438022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:40.312 [2024-12-07 10:41:39.438033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.312 [2024-12-07 10:41:39.438043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:40.312 [2024-12-07 10:41:39.438052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:40.312 [2024-12-07 10:41:39.438062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:40.312 [2024-12-07 10:41:39.438071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:40.312 [2024-12-07 10:41:39.438081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:40.312 [2024-12-07 10:41:39.438090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:40.312 [2024-12-07 10:41:39.438100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:40.312 [2024-12-07 10:41:39.438109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:40.312 [2024-12-07 10:41:39.438118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:40.312 [2024-12-07 10:41:39.438138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:40.312 [2024-12-07 10:41:39.438148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:40.312 [2024-12-07 10:41:39.438157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.312 [2024-12-07 10:41:39.438166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:40.312 [2024-12-07 10:41:39.438175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:40.312 [2024-12-07 10:41:39.438185] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.312 [2024-12-07 10:41:39.438194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:40.312 [2024-12-07 10:41:39.438203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:40.312 [2024-12-07 10:41:39.438212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:40.312 [2024-12-07 10:41:39.438221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:40.312 [2024-12-07 10:41:39.438231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:40.312 [2024-12-07 10:41:39.438240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:40.312 [2024-12-07 10:41:39.438249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:40.312 [2024-12-07 10:41:39.438258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:40.312 [2024-12-07 10:41:39.438267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:40.312 [2024-12-07 10:41:39.438276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:40.312 [2024-12-07 10:41:39.438285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:40.312 [2024-12-07 10:41:39.438293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:40.312 [2024-12-07 10:41:39.438303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:40.312 [2024-12-07 10:41:39.438312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:40.312 [2024-12-07 10:41:39.438321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:40.312 [2024-12-07 10:41:39.438330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:40.312 [2024-12-07 10:41:39.438339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:40.312 [2024-12-07 10:41:39.438349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:40.312 [2024-12-07 10:41:39.438357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:40.312 [2024-12-07 10:41:39.438367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:40.312 [2024-12-07 10:41:39.438376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.312 [2024-12-07 10:41:39.438384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:40.312 [2024-12-07 10:41:39.438394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:40.312 [2024-12-07 10:41:39.438404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.312 [2024-12-07 10:41:39.438413] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:40.312 [2024-12-07 10:41:39.438422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:40.312 [2024-12-07 10:41:39.438432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:40.312 [2024-12-07 10:41:39.438442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.312 [2024-12-07 10:41:39.438452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:40.312 [2024-12-07 10:41:39.438462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:40.312 [2024-12-07 10:41:39.438471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:40.312 
[2024-12-07 10:41:39.438480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:40.312 [2024-12-07 10:41:39.438488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:40.312 [2024-12-07 10:41:39.438498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:40.312 [2024-12-07 10:41:39.438509] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:40.312 [2024-12-07 10:41:39.438522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:40.312 [2024-12-07 10:41:39.438538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:40.312 [2024-12-07 10:41:39.438548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:40.312 [2024-12-07 10:41:39.438558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:40.312 [2024-12-07 10:41:39.438568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:40.312 [2024-12-07 10:41:39.438578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:40.312 [2024-12-07 10:41:39.438589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:40.312 [2024-12-07 10:41:39.438599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:40.312 [2024-12-07 10:41:39.438609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:40.312 [2024-12-07 10:41:39.438619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:40.312 [2024-12-07 10:41:39.438630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:40.312 [2024-12-07 10:41:39.438640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:40.312 [2024-12-07 10:41:39.438663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:40.312 [2024-12-07 10:41:39.438674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:40.312 [2024-12-07 10:41:39.438685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:40.312 [2024-12-07 10:41:39.438695] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:40.312 [2024-12-07 10:41:39.438706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:40.312 [2024-12-07 10:41:39.438717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:40.312 [2024-12-07 10:41:39.438728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:40.313 [2024-12-07 10:41:39.438738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:40.313 [2024-12-07 10:41:39.438749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:40.313 [2024-12-07 10:41:39.438760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.313 [2024-12-07 10:41:39.438772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:40.313 [2024-12-07 10:41:39.438781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:29:40.313 [2024-12-07 10:41:39.438792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.313 [2024-12-07 10:41:39.477911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.313 [2024-12-07 10:41:39.477947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:40.313 [2024-12-07 10:41:39.477976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.131 ms 00:29:40.313 [2024-12-07 10:41:39.478010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.313 [2024-12-07 10:41:39.478085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.313 [2024-12-07 10:41:39.478097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:40.313 [2024-12-07 10:41:39.478107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:40.313 [2024-12-07 10:41:39.478117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.313 [2024-12-07 10:41:39.535463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.313 [2024-12-07 10:41:39.535497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:40.313 [2024-12-07 10:41:39.535527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.381 ms 00:29:40.313 [2024-12-07 10:41:39.535538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.313 [2024-12-07 10:41:39.535571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.313 [2024-12-07 10:41:39.535582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:40.313 [2024-12-07 10:41:39.535597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:40.313 [2024-12-07 10:41:39.535607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.313 [2024-12-07 10:41:39.536114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.313 [2024-12-07 10:41:39.536136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:40.313 [2024-12-07 10:41:39.536148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:29:40.313 [2024-12-07 10:41:39.536158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.313 [2024-12-07 10:41:39.536277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.313 [2024-12-07 10:41:39.536290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:40.313 [2024-12-07 10:41:39.536307] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:29:40.313 [2024-12-07 10:41:39.536317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.313 [2024-12-07 10:41:39.554783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.313 [2024-12-07 10:41:39.554820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:40.313 [2024-12-07 10:41:39.554848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.474 ms 00:29:40.313 [2024-12-07 10:41:39.554858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.313 [2024-12-07 10:41:39.573282] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:40.313 [2024-12-07 10:41:39.573328] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:40.313 [2024-12-07 10:41:39.573342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.313 [2024-12-07 10:41:39.573353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:40.313 [2024-12-07 10:41:39.573363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.417 ms 00:29:40.313 [2024-12-07 10:41:39.573389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.313 [2024-12-07 10:41:39.601562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.313 [2024-12-07 10:41:39.601598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:40.313 [2024-12-07 10:41:39.601611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.176 ms 00:29:40.313 [2024-12-07 10:41:39.601621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.313 [2024-12-07 10:41:39.618823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.313 [2024-12-07 10:41:39.618858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:40.313 [2024-12-07 10:41:39.618886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.172 ms 00:29:40.313 [2024-12-07 10:41:39.618896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.313 [2024-12-07 10:41:39.636349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.313 [2024-12-07 10:41:39.636383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:40.313 [2024-12-07 10:41:39.636395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.442 ms 00:29:40.313 [2024-12-07 10:41:39.636404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.313 [2024-12-07 10:41:39.637211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.313 [2024-12-07 10:41:39.637243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:40.313 [2024-12-07 10:41:39.637259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:29:40.313 [2024-12-07 10:41:39.637269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.573 [2024-12-07 10:41:39.719531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.573 [2024-12-07 10:41:39.719593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:40.573 [2024-12-07 10:41:39.719615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.373 ms 00:29:40.573 [2024-12-07 10:41:39.719641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.573 [2024-12-07 10:41:39.729692] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:40.573 [2024-12-07 10:41:39.732029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.573 [2024-12-07 10:41:39.732058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:40.573 [2024-12-07 10:41:39.732071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.362 ms 00:29:40.573 [2024-12-07 10:41:39.732096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.573 [2024-12-07 10:41:39.732173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.573 [2024-12-07 10:41:39.732186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:40.573 [2024-12-07 10:41:39.732202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:40.573 [2024-12-07 10:41:39.732212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.573 [2024-12-07 10:41:39.733714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.573 [2024-12-07 10:41:39.733751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:40.573 [2024-12-07 10:41:39.733779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.462 ms 00:29:40.573 [2024-12-07 10:41:39.733790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.573 [2024-12-07 10:41:39.733819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.573 [2024-12-07 10:41:39.733831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:40.573 [2024-12-07 10:41:39.733841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:40.573 [2024-12-07 10:41:39.733851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.573 [2024-12-07 10:41:39.733897] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:40.573 [2024-12-07 10:41:39.733910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.573 [2024-12-07 10:41:39.733920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:40.573 [2024-12-07 10:41:39.733930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:40.573 [2024-12-07 10:41:39.733941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.573 [2024-12-07 10:41:39.770825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.573 [2024-12-07 10:41:39.770865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:40.573 [2024-12-07 10:41:39.770886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.923 ms 00:29:40.573 [2024-12-07 10:41:39.770897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.573 [2024-12-07 10:41:39.770971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.573 [2024-12-07 10:41:39.770993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:40.573 [2024-12-07 10:41:39.771004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:40.573 [2024-12-07 10:41:39.771014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:40.573 [2024-12-07 10:41:39.772194] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 368.135 ms, result 0 00:29:41.951  [2024-12-07T10:41:42.238Z] Copying: 1216/1048576 [kB] (1216 kBps) [2024-12-07T10:41:43.173Z] Copying: 9632/1048576 [kB] (8416 kBps) [2024-12-07T10:41:44.107Z] Copying: 42/1024 [MB] (32 MBps) [2024-12-07T10:41:45.042Z] Copying: 75/1024 [MB] (33 MBps) [2024-12-07T10:41:45.981Z] Copying: 109/1024 [MB] (33 MBps) [2024-12-07T10:41:47.357Z] Copying: 146/1024 [MB] (37 MBps) [2024-12-07T10:41:47.990Z] Copying: 180/1024 [MB] (33 MBps) [2024-12-07T10:41:49.371Z] Copying: 212/1024 [MB] (31 MBps) [2024-12-07T10:41:50.310Z] Copying: 242/1024 [MB] (30 MBps) [2024-12-07T10:41:51.247Z] Copying: 272/1024 [MB] (29 MBps) [2024-12-07T10:41:52.186Z] Copying: 307/1024 [MB] (35 MBps) [2024-12-07T10:41:53.126Z] Copying: 340/1024 [MB] (32 MBps) [2024-12-07T10:41:54.065Z] Copying: 372/1024 [MB] (31 MBps) [2024-12-07T10:41:55.003Z] Copying: 404/1024 [MB] (32 MBps) [2024-12-07T10:41:56.382Z] Copying: 437/1024 [MB] (32 MBps) [2024-12-07T10:41:57.319Z] Copying: 470/1024 [MB] (33 MBps) [2024-12-07T10:41:58.256Z] Copying: 503/1024 [MB] (32 MBps) [2024-12-07T10:41:59.193Z] Copying: 536/1024 [MB] (32 MBps) [2024-12-07T10:42:00.130Z] Copying: 568/1024 [MB] (32 MBps) [2024-12-07T10:42:01.069Z] Copying: 601/1024 [MB] (32 MBps) [2024-12-07T10:42:02.006Z] Copying: 632/1024 [MB] (31 MBps) [2024-12-07T10:42:03.384Z] Copying: 664/1024 [MB] (31 MBps) [2024-12-07T10:42:03.952Z] Copying: 697/1024 [MB] (33 MBps) [2024-12-07T10:42:05.330Z] Copying: 730/1024 [MB] (33 MBps) [2024-12-07T10:42:06.265Z] Copying: 763/1024 [MB] (33 MBps) [2024-12-07T10:42:07.216Z] Copying: 797/1024 [MB] (33 MBps) [2024-12-07T10:42:08.155Z] Copying: 830/1024 [MB] (33 MBps) [2024-12-07T10:42:09.093Z] Copying: 863/1024 [MB] (33 MBps) [2024-12-07T10:42:10.031Z] Copying: 896/1024 [MB] (33 MBps) [2024-12-07T10:42:10.969Z] Copying: 929/1024 [MB] (33 MBps) [2024-12-07T10:42:12.347Z] Copying: 962/1024 [MB] (33 MBps) [2024-12-07T10:42:12.916Z] Copying: 995/1024 [MB] (33 MBps) [2024-12-07T10:42:13.488Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-12-07 10:42:13.221927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.135 [2024-12-07 10:42:13.222102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:14.135 [2024-12-07 10:42:13.222123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:14.135 [2024-12-07 10:42:13.222136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.135 [2024-12-07 10:42:13.222165] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:14.135 [2024-12-07 10:42:13.227693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.135 [2024-12-07 10:42:13.227735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:14.135 [2024-12-07 10:42:13.227751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.516 ms 00:30:14.135 [2024-12-07 10:42:13.227762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.135 [2024-12-07 10:42:13.228007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.135 [2024-12-07 10:42:13.228028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:14.135 [2024-12-07 10:42:13.228039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.212 ms 00:30:14.135 [2024-12-07 10:42:13.228049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.135 [2024-12-07 10:42:13.239764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.135 [2024-12-07 10:42:13.239810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:14.135 [2024-12-07 10:42:13.239826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.705 ms 00:30:14.135 [2024-12-07 10:42:13.239838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.135 [2024-12-07 10:42:13.244853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.135 [2024-12-07 10:42:13.244886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:14.135 [2024-12-07 10:42:13.244904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.986 ms 00:30:14.135 [2024-12-07 10:42:13.244915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.135 [2024-12-07 10:42:13.279683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.135 [2024-12-07 10:42:13.279721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:14.135 [2024-12-07 10:42:13.279750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.748 ms 00:30:14.135 [2024-12-07 10:42:13.279760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.135 [2024-12-07 10:42:13.299848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.135 [2024-12-07 10:42:13.299884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:14.135 [2024-12-07 10:42:13.299897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.081 ms 00:30:14.135 [2024-12-07 10:42:13.299906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.135 [2024-12-07 10:42:13.302170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.135 [2024-12-07 10:42:13.302205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:14.135 [2024-12-07 10:42:13.302218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.210 ms 00:30:14.135 [2024-12-07 10:42:13.302235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.135 [2024-12-07 10:42:13.336876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.135 [2024-12-07 10:42:13.336910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:14.135 [2024-12-07 10:42:13.336922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.679 ms 00:30:14.135 [2024-12-07 10:42:13.336932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.135 [2024-12-07 10:42:13.371255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.135 [2024-12-07 10:42:13.371290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:14.135 [2024-12-07 10:42:13.371302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.318 ms 00:30:14.135 [2024-12-07 10:42:13.371311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.135 [2024-12-07 10:42:13.405173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.135 [2024-12-07 10:42:13.405211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:14.135 [2024-12-07 
10:42:13.405223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.865 ms 00:30:14.135 [2024-12-07 10:42:13.405232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.135 [2024-12-07 10:42:13.438417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.135 [2024-12-07 10:42:13.438451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:14.135 [2024-12-07 10:42:13.438463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.149 ms 00:30:14.135 [2024-12-07 10:42:13.438472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.135 [2024-12-07 10:42:13.438524] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:14.135 [2024-12-07 10:42:13.438540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:14.135 [2024-12-07 10:42:13.438552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:14.135 [2024-12-07 10:42:13.438563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438741] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:14.135 [2024-12-07 10:42:13.438980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 
10:42:13.439022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:30:14.136 [2024-12-07 10:42:13.439285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:14.136 [2024-12-07 10:42:13.439613] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:14.136 [2024-12-07 10:42:13.439623] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a9faa9d3-da78-4248-a35f-f1e2330b4cd7 00:30:14.136 [2024-12-07 10:42:13.439634] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:14.136 [2024-12-07 10:42:13.439644] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 159424 00:30:14.136 [2024-12-07 10:42:13.439658] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 157440 00:30:14.136 [2024-12-07 10:42:13.439668] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0126 00:30:14.136 [2024-12-07 10:42:13.439678] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:14.136 [2024-12-07 10:42:13.439698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:14.136 [2024-12-07 10:42:13.439708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:14.136 [2024-12-07 10:42:13.439717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:14.136 [2024-12-07 10:42:13.439727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:14.136 [2024-12-07 10:42:13.439736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.136 [2024-12-07 10:42:13.439747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:14.136 [2024-12-07 10:42:13.439757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:30:14.136 [2024-12-07 10:42:13.439767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.136 [2024-12-07 10:42:13.458812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.136 [2024-12-07 10:42:13.458846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:14.136 [2024-12-07 10:42:13.458875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.040 ms 00:30:14.136 [2024-12-07 10:42:13.458884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.136 [2024-12-07 10:42:13.459467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.136 [2024-12-07 10:42:13.459486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:14.136 [2024-12-07 10:42:13.459497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:30:14.136 [2024-12-07 10:42:13.459508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.396 [2024-12-07 
10:42:13.509965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:14.396 [2024-12-07 10:42:13.510006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:14.396 [2024-12-07 10:42:13.510018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:14.396 [2024-12-07 10:42:13.510028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.396 [2024-12-07 10:42:13.510075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:14.396 [2024-12-07 10:42:13.510086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:14.396 [2024-12-07 10:42:13.510095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:14.396 [2024-12-07 10:42:13.510105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.396 [2024-12-07 10:42:13.510170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:14.396 [2024-12-07 10:42:13.510183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:14.396 [2024-12-07 10:42:13.510193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:14.396 [2024-12-07 10:42:13.510202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.396 [2024-12-07 10:42:13.510218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:14.396 [2024-12-07 10:42:13.510228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:14.396 [2024-12-07 10:42:13.510238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:14.397 [2024-12-07 10:42:13.510248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.397 [2024-12-07 10:42:13.626484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:14.397 [2024-12-07 10:42:13.626540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:14.397 [2024-12-07 10:42:13.626555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:14.397 [2024-12-07 10:42:13.626565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.397 [2024-12-07 10:42:13.724191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:14.397 [2024-12-07 10:42:13.724235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:14.397 [2024-12-07 10:42:13.724250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:14.397 [2024-12-07 10:42:13.724260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.397 [2024-12-07 10:42:13.724348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:14.397 [2024-12-07 10:42:13.724363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:14.397 [2024-12-07 10:42:13.724375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:14.397 [2024-12-07 10:42:13.724385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.397 [2024-12-07 10:42:13.724422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:14.397 [2024-12-07 10:42:13.724434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:14.397 [2024-12-07 10:42:13.724445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:14.397 [2024-12-07 10:42:13.724455] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.397 [2024-12-07 10:42:13.724558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:14.397 [2024-12-07 10:42:13.724571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:14.397 [2024-12-07 10:42:13.724586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:14.397 [2024-12-07 10:42:13.724597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.397 [2024-12-07 10:42:13.724648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:14.397 [2024-12-07 10:42:13.724661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:14.397 [2024-12-07 10:42:13.724671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:14.397 [2024-12-07 10:42:13.724682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.397 [2024-12-07 10:42:13.724720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:14.397 [2024-12-07 10:42:13.724733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:14.397 [2024-12-07 10:42:13.724747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:14.397 [2024-12-07 10:42:13.724758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.397 [2024-12-07 10:42:13.724801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:14.397 [2024-12-07 10:42:13.724814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:14.397 [2024-12-07 10:42:13.724824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:14.397 [2024-12-07 10:42:13.724834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.397 [2024-12-07 10:42:13.724957] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 503.830 ms, result 0 00:30:15.777 00:30:15.777 00:30:15.777 10:42:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:17.151 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:17.151 10:42:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:17.151 [2024-12-07 10:42:16.458372] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:30:17.151 [2024-12-07 10:42:16.458480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83095 ] 00:30:17.410 [2024-12-07 10:42:16.635380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.410 [2024-12-07 10:42:16.742639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.979 [2024-12-07 10:42:17.102868] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:17.979 [2024-12-07 10:42:17.102935] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:17.979 [2024-12-07 10:42:17.263491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.979 [2024-12-07 10:42:17.263538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:17.979 [2024-12-07 10:42:17.263553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:17.979 [2024-12-07 10:42:17.263563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.979 [2024-12-07 10:42:17.263606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.979 [2024-12-07 10:42:17.263621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:17.979 [2024-12-07 10:42:17.263631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:30:17.979 [2024-12-07 10:42:17.263641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.979 [2024-12-07 10:42:17.263660] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:17.979 [2024-12-07 10:42:17.264555] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:17.979 [2024-12-07 10:42:17.264583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.979 [2024-12-07 10:42:17.264594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:17.979 [2024-12-07 10:42:17.264605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:30:17.979 [2024-12-07 10:42:17.264615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.979 [2024-12-07 10:42:17.266075] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:17.979 [2024-12-07 10:42:17.284001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.979 [2024-12-07 10:42:17.284035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:17.979 [2024-12-07 10:42:17.284049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.956 ms 00:30:17.979 [2024-12-07 10:42:17.284059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.979 [2024-12-07 10:42:17.284131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.979 [2024-12-07 10:42:17.284143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:17.979 [2024-12-07 10:42:17.284154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:30:17.979 [2024-12-07 10:42:17.284164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.979 [2024-12-07 10:42:17.290991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:17.979 [2024-12-07 10:42:17.291016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:17.979 [2024-12-07 10:42:17.291028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.770 ms 00:30:17.979 [2024-12-07 10:42:17.291041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.979 [2024-12-07 10:42:17.291118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.979 [2024-12-07 10:42:17.291130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:17.979 [2024-12-07 10:42:17.291140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:17.979 [2024-12-07 10:42:17.291150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.979 [2024-12-07 10:42:17.291187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.979 [2024-12-07 10:42:17.291199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:17.979 [2024-12-07 10:42:17.291208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:17.979 [2024-12-07 10:42:17.291218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.979 [2024-12-07 10:42:17.291243] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:17.979 [2024-12-07 10:42:17.296166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.979 [2024-12-07 10:42:17.296197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:17.979 [2024-12-07 10:42:17.296229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.936 ms 00:30:17.979 [2024-12-07 10:42:17.296240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.979 [2024-12-07 10:42:17.296275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.979 [2024-12-07 10:42:17.296287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:17.979 [2024-12-07 10:42:17.296299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:17.979 [2024-12-07 10:42:17.296310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.979 [2024-12-07 10:42:17.296364] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:17.979 [2024-12-07 10:42:17.296390] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:17.979 [2024-12-07 10:42:17.296437] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:17.979 [2024-12-07 10:42:17.296461] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:17.979 [2024-12-07 10:42:17.296566] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:17.979 [2024-12-07 10:42:17.296581] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:17.979 [2024-12-07 10:42:17.296595] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:17.979 [2024-12-07 10:42:17.296608] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:17.979 [2024-12-07 10:42:17.296621] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:17.979 [2024-12-07 10:42:17.296633] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:17.979 [2024-12-07 10:42:17.296645] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:17.979 [2024-12-07 10:42:17.296659] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:17.980 [2024-12-07 10:42:17.296669] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:17.980 [2024-12-07 10:42:17.296679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.980 [2024-12-07 10:42:17.296691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:17.980 [2024-12-07 10:42:17.296702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:30:17.980 [2024-12-07 10:42:17.296712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.980 [2024-12-07 10:42:17.296783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.980 [2024-12-07 10:42:17.296796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:17.980 [2024-12-07 10:42:17.296806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:17.980 [2024-12-07 10:42:17.296816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.980 [2024-12-07 10:42:17.296912] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:17.980 [2024-12-07 10:42:17.296928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:17.980 [2024-12-07 10:42:17.296939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:17.980 [2024-12-07 10:42:17.296949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:17.980 [2024-12-07 10:42:17.296959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:17.980 [2024-12-07 10:42:17.296969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:17.980 [2024-12-07 10:42:17.296979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:17.980 [2024-12-07 10:42:17.296990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:17.980 [2024-12-07 10:42:17.297000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:17.980 [2024-12-07 10:42:17.297010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:17.980 [2024-12-07 10:42:17.297032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:17.980 [2024-12-07 10:42:17.297042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:17.980 [2024-12-07 10:42:17.297051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:17.980 [2024-12-07 10:42:17.297070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:17.980 [2024-12-07 10:42:17.297083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:17.980 [2024-12-07 10:42:17.297092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:17.980 [2024-12-07 10:42:17.297102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:17.980 [2024-12-07 10:42:17.297111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:17.980 [2024-12-07 10:42:17.297121] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:17.980 [2024-12-07 10:42:17.297130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:17.980 [2024-12-07 10:42:17.297139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:17.980 [2024-12-07 10:42:17.297149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:17.980 [2024-12-07 10:42:17.297160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:17.980 [2024-12-07 10:42:17.297169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:17.980 [2024-12-07 10:42:17.297178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:17.980 [2024-12-07 10:42:17.297187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:17.980 [2024-12-07 10:42:17.297196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:17.980 [2024-12-07 10:42:17.297205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:17.980 [2024-12-07 10:42:17.297214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:17.980 [2024-12-07 10:42:17.297223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:17.980 [2024-12-07 10:42:17.297232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:17.980 [2024-12-07 10:42:17.297241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:17.980 [2024-12-07 10:42:17.297251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:17.980 [2024-12-07 10:42:17.297260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:17.980 [2024-12-07 10:42:17.297268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:17.980 [2024-12-07 10:42:17.297277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:17.980 [2024-12-07 10:42:17.297286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:17.980 [2024-12-07 10:42:17.297295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:17.980 [2024-12-07 10:42:17.297304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:17.980 [2024-12-07 10:42:17.297313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:17.980 [2024-12-07 10:42:17.297322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:17.980 [2024-12-07 10:42:17.297331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:17.980 [2024-12-07 10:42:17.297340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:17.980 [2024-12-07 10:42:17.297348] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:17.980 [2024-12-07 10:42:17.297358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:17.980 [2024-12-07 10:42:17.297368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:17.980 [2024-12-07 10:42:17.297377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:17.980 [2024-12-07 10:42:17.297386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:17.980 [2024-12-07 10:42:17.297395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:17.980 [2024-12-07 10:42:17.297404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:17.980 
[2024-12-07 10:42:17.297413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:17.980 [2024-12-07 10:42:17.297422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:17.980 [2024-12-07 10:42:17.297430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:17.980 [2024-12-07 10:42:17.297441] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:17.980 [2024-12-07 10:42:17.297453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:17.980 [2024-12-07 10:42:17.297469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:17.980 [2024-12-07 10:42:17.297480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:17.980 [2024-12-07 10:42:17.297491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:17.980 [2024-12-07 10:42:17.297501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:17.980 [2024-12-07 10:42:17.297513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:17.980 [2024-12-07 10:42:17.297523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:17.980 [2024-12-07 10:42:17.297533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:17.980 [2024-12-07 10:42:17.297543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:17.980 [2024-12-07 10:42:17.297554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:17.980 [2024-12-07 10:42:17.297563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:17.980 [2024-12-07 10:42:17.297573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:17.980 [2024-12-07 10:42:17.297583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:17.980 [2024-12-07 10:42:17.297593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:17.980 [2024-12-07 10:42:17.297603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:17.980 [2024-12-07 10:42:17.297612] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:17.980 [2024-12-07 10:42:17.297623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:17.980 [2024-12-07 10:42:17.297634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:17.980 [2024-12-07 10:42:17.297644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:17.980 [2024-12-07 10:42:17.297656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:17.980 [2024-12-07 10:42:17.297666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:17.980 [2024-12-07 10:42:17.297677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.980 [2024-12-07 10:42:17.297688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:17.980 [2024-12-07 10:42:17.297698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:30:17.980 [2024-12-07 10:42:17.297707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.240 [2024-12-07 10:42:17.338301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.240 [2024-12-07 10:42:17.338338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:18.240 [2024-12-07 10:42:17.338351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.612 ms 00:30:18.240 [2024-12-07 10:42:17.338366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.240 [2024-12-07 10:42:17.338444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.240 [2024-12-07 10:42:17.338456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:18.240 [2024-12-07 10:42:17.338466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:30:18.240 [2024-12-07 10:42:17.338477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.240 [2024-12-07 10:42:17.397792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.240 [2024-12-07 10:42:17.397838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:18.240 [2024-12-07 10:42:17.397851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.347 ms 00:30:18.240 [2024-12-07 10:42:17.397861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.240 [2024-12-07 10:42:17.397900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.241 [2024-12-07 10:42:17.397911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:18.241 [2024-12-07 10:42:17.397926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:18.241 [2024-12-07 10:42:17.397935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.241 [2024-12-07 10:42:17.398438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.241 [2024-12-07 10:42:17.398460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:18.241 [2024-12-07 10:42:17.398472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:30:18.241 [2024-12-07 10:42:17.398483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.241 [2024-12-07 10:42:17.398610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.241 [2024-12-07 10:42:17.398624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:18.241 [2024-12-07 10:42:17.398641] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:30:18.241 [2024-12-07 10:42:17.398661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.241 [2024-12-07 10:42:17.416944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.241 [2024-12-07 10:42:17.416985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:18.241 [2024-12-07 10:42:17.416998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.291 ms 00:30:18.241 [2024-12-07 10:42:17.417009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.241 [2024-12-07 10:42:17.435203] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:18.241 [2024-12-07 10:42:17.435240] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:18.241 [2024-12-07 10:42:17.435255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.241 [2024-12-07 10:42:17.435266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:18.241 [2024-12-07 10:42:17.435278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.174 ms 00:30:18.241 [2024-12-07 10:42:17.435288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.241 [2024-12-07 10:42:17.463313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.241 [2024-12-07 10:42:17.463347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:18.241 [2024-12-07 10:42:17.463360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.025 ms 00:30:18.241 [2024-12-07 10:42:17.463370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.241 [2024-12-07 10:42:17.481018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.241 [2024-12-07 10:42:17.481052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:18.241 [2024-12-07 10:42:17.481065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.615 ms 00:30:18.241 [2024-12-07 10:42:17.481074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.241 [2024-12-07 10:42:17.498426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.241 [2024-12-07 10:42:17.498457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:18.241 [2024-12-07 10:42:17.498469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.342 ms 00:30:18.241 [2024-12-07 10:42:17.498478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.241 [2024-12-07 10:42:17.499252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.241 [2024-12-07 10:42:17.499273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:18.241 [2024-12-07 10:42:17.499306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:30:18.241 [2024-12-07 10:42:17.499317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.241 [2024-12-07 10:42:17.581803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.241 [2024-12-07 10:42:17.581851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:18.241 [2024-12-07 10:42:17.581874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.596 ms 00:30:18.241 [2024-12-07 10:42:17.581884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.500 [2024-12-07 10:42:17.592669] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:18.500 [2024-12-07 10:42:17.595081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.500 [2024-12-07 10:42:17.595125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:18.500 [2024-12-07 10:42:17.595138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.172 ms 00:30:18.500 [2024-12-07 10:42:17.595159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.500 [2024-12-07 10:42:17.595239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.500 [2024-12-07 10:42:17.595270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:18.500 [2024-12-07 10:42:17.595286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:18.500 [2024-12-07 10:42:17.595296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.500 [2024-12-07 10:42:17.596187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.500 [2024-12-07 10:42:17.596208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:18.500 [2024-12-07 10:42:17.596220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:30:18.500 [2024-12-07 10:42:17.596230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.500 [2024-12-07 10:42:17.596256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.500 [2024-12-07 10:42:17.596267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:18.500 [2024-12-07 10:42:17.596277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:18.500 [2024-12-07 10:42:17.596287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.500 [2024-12-07 10:42:17.596343] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:18.500 [2024-12-07 10:42:17.596356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.500 [2024-12-07 10:42:17.596367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:18.500 [2024-12-07 10:42:17.596378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:18.500 [2024-12-07 10:42:17.596388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.500 [2024-12-07 10:42:17.631379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.500 [2024-12-07 10:42:17.631414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:18.500 [2024-12-07 10:42:17.631434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.027 ms 00:30:18.500 [2024-12-07 10:42:17.631444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.500 [2024-12-07 10:42:17.631513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.500 [2024-12-07 10:42:17.631525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:18.500 [2024-12-07 10:42:17.631536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:18.500 [2024-12-07 10:42:17.631545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:18.500 [2024-12-07 10:42:17.632630] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 369.296 ms, result 0 00:30:19.925  [2024-12-07T10:42:19.853Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-07T10:42:21.231Z] Copying: 52/1024 [MB] (26 MBps) [2024-12-07T10:42:22.167Z] Copying: 79/1024 [MB] (26 MBps) [2024-12-07T10:42:23.102Z] Copying: 105/1024 [MB] (26 MBps) [2024-12-07T10:42:24.038Z] Copying: 131/1024 [MB] (26 MBps) [2024-12-07T10:42:24.976Z] Copying: 157/1024 [MB] (25 MBps) [2024-12-07T10:42:25.912Z] Copying: 183/1024 [MB] (25 MBps) [2024-12-07T10:42:26.848Z] Copying: 209/1024 [MB] (26 MBps) [2024-12-07T10:42:28.225Z] Copying: 236/1024 [MB] (26 MBps) [2024-12-07T10:42:29.162Z] Copying: 262/1024 [MB] (26 MBps) [2024-12-07T10:42:30.101Z] Copying: 288/1024 [MB] (25 MBps) [2024-12-07T10:42:31.037Z] Copying: 314/1024 [MB] (25 MBps) [2024-12-07T10:42:31.973Z] Copying: 340/1024 [MB] (26 MBps) [2024-12-07T10:42:32.909Z] Copying: 366/1024 [MB] (26 MBps) [2024-12-07T10:42:33.844Z] Copying: 392/1024 [MB] (25 MBps) [2024-12-07T10:42:35.220Z] Copying: 417/1024 [MB] (24 MBps) [2024-12-07T10:42:36.155Z] Copying: 443/1024 [MB] (25 MBps) [2024-12-07T10:42:37.089Z] Copying: 469/1024 [MB] (26 MBps) [2024-12-07T10:42:38.022Z] Copying: 494/1024 [MB] (24 MBps) [2024-12-07T10:42:38.959Z] Copying: 518/1024 [MB] (24 MBps) [2024-12-07T10:42:39.917Z] Copying: 542/1024 [MB] (24 MBps) [2024-12-07T10:42:40.855Z] Copying: 566/1024 [MB] (23 MBps) [2024-12-07T10:42:42.233Z] Copying: 590/1024 [MB] (24 MBps) [2024-12-07T10:42:42.802Z] Copying: 614/1024 [MB] (23 MBps) [2024-12-07T10:42:44.200Z] Copying: 638/1024 [MB] (24 MBps) [2024-12-07T10:42:45.135Z] Copying: 662/1024 [MB] (24 MBps) [2024-12-07T10:42:46.071Z] Copying: 686/1024 [MB] (24 MBps) [2024-12-07T10:42:47.009Z] Copying: 710/1024 [MB] (24 MBps) [2024-12-07T10:42:47.944Z] Copying: 734/1024 [MB] (23 MBps) [2024-12-07T10:42:48.881Z] Copying: 759/1024 [MB] (24 MBps) [2024-12-07T10:42:49.818Z] Copying: 784/1024 [MB] (24 MBps) [2024-12-07T10:42:50.851Z] Copying: 808/1024 [MB] (24 MBps) [2024-12-07T10:42:51.792Z] Copying: 835/1024 [MB] (26 MBps) [2024-12-07T10:42:53.170Z] Copying: 861/1024 [MB] (26 MBps) [2024-12-07T10:42:54.107Z] Copying: 887/1024 [MB] (26 MBps) [2024-12-07T10:42:55.043Z] Copying: 913/1024 [MB] (26 MBps) [2024-12-07T10:42:55.977Z] Copying: 940/1024 [MB] (26 MBps) [2024-12-07T10:42:56.910Z] Copying: 966/1024 [MB] (25 MBps) [2024-12-07T10:42:57.845Z] Copying: 989/1024 [MB] (23 MBps) [2024-12-07T10:42:58.413Z] Copying: 1012/1024 [MB] (23 MBps) [2024-12-07T10:42:58.413Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-07 10:42:58.292226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.060 [2024-12-07 10:42:58.292365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:59.060 [2024-12-07 10:42:58.292422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:59.060 [2024-12-07 10:42:58.292462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.060 [2024-12-07 10:42:58.292544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:59.060 [2024-12-07 10:42:58.306479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.060 [2024-12-07 10:42:58.306580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:59.060 [2024-12-07 10:42:58.306613] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 13.895 ms 00:30:59.060 [2024-12-07 10:42:58.306640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.060 [2024-12-07 10:42:58.307252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.060 [2024-12-07 10:42:58.307310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:59.060 [2024-12-07 10:42:58.307339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:30:59.060 [2024-12-07 10:42:58.307365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.060 [2024-12-07 10:42:58.313014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.060 [2024-12-07 10:42:58.313050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:59.060 [2024-12-07 10:42:58.313069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.621 ms 00:30:59.060 [2024-12-07 10:42:58.313095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.060 [2024-12-07 10:42:58.320234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.060 [2024-12-07 10:42:58.320284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:59.060 [2024-12-07 10:42:58.320303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.122 ms 00:30:59.060 [2024-12-07 10:42:58.320320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.060 [2024-12-07 10:42:58.356871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.060 [2024-12-07 10:42:58.356912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:59.060 [2024-12-07 10:42:58.356928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.520 ms 00:30:59.060 [2024-12-07 10:42:58.356940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.060 [2024-12-07 10:42:58.377715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.060 [2024-12-07 10:42:58.377757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:59.060 [2024-12-07 10:42:58.377772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.758 ms 00:30:59.060 [2024-12-07 10:42:58.377784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.060 [2024-12-07 10:42:58.379959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.060 [2024-12-07 10:42:58.380011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:59.060 [2024-12-07 10:42:58.380025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.125 ms 00:30:59.060 [2024-12-07 10:42:58.380037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.321 [2024-12-07 10:42:58.414234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.321 [2024-12-07 10:42:58.414275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:59.321 [2024-12-07 10:42:58.414290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.233 ms 00:30:59.321 [2024-12-07 10:42:58.414301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.321 [2024-12-07 10:42:58.448662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.321 [2024-12-07 10:42:58.448710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:59.321 
[2024-12-07 10:42:58.448724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.375 ms 00:30:59.321 [2024-12-07 10:42:58.448734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.321 [2024-12-07 10:42:58.483101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.321 [2024-12-07 10:42:58.483141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:59.321 [2024-12-07 10:42:58.483156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.381 ms 00:30:59.321 [2024-12-07 10:42:58.483168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.321 [2024-12-07 10:42:58.516837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.321 [2024-12-07 10:42:58.516877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:59.321 [2024-12-07 10:42:58.516891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.639 ms 00:30:59.321 [2024-12-07 10:42:58.516901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.321 [2024-12-07 10:42:58.516942] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:59.321 [2024-12-07 10:42:58.516970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:59.321 [2024-12-07 10:42:58.517000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:59.321 [2024-12-07 10:42:58.517013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 
10:42:58.517177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 
00:30:59.321 [2024-12-07 10:42:58.517461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:59.321 [2024-12-07 10:42:58.517550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 
wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.517992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.518004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.518015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.518028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.518039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.518051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.518063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.518075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.518088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.518100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.518111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.518122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.518134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:59.322 [2024-12-07 10:42:58.518153] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:59.322 [2024-12-07 10:42:58.518164] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a9faa9d3-da78-4248-a35f-f1e2330b4cd7 00:30:59.322 [2024-12-07 10:42:58.518176] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:59.322 [2024-12-07 10:42:58.518187] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:59.322 [2024-12-07 10:42:58.518197] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:59.322 [2024-12-07 10:42:58.518209] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:59.322 [2024-12-07 10:42:58.518234] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:59.322 [2024-12-07 10:42:58.518245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:59.322 [2024-12-07 10:42:58.518257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:59.322 [2024-12-07 10:42:58.518266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:59.322 [2024-12-07 10:42:58.518276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:59.322 [2024-12-07 10:42:58.518287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.322 [2024-12-07 10:42:58.518298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:59.322 [2024-12-07 10:42:58.518311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:30:59.322 [2024-12-07 10:42:58.518327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.322 [2024-12-07 10:42:58.538013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.322 [2024-12-07 10:42:58.538049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:59.322 [2024-12-07 10:42:58.538064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.680 ms 00:30:59.322 [2024-12-07 10:42:58.538075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.322 [2024-12-07 10:42:58.538642] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.322 [2024-12-07 10:42:58.538688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:59.322 [2024-12-07 10:42:58.538701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:30:59.322 [2024-12-07 10:42:58.538713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.322 [2024-12-07 10:42:58.590677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.322 [2024-12-07 10:42:58.590716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:59.323 [2024-12-07 10:42:58.590731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.323 [2024-12-07 10:42:58.590744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.323 [2024-12-07 10:42:58.590803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.323 [2024-12-07 10:42:58.590825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:59.323 [2024-12-07 10:42:58.590838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.323 [2024-12-07 10:42:58.590851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.323 [2024-12-07 10:42:58.590924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.323 [2024-12-07 10:42:58.590940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:59.323 [2024-12-07 10:42:58.590968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.323 [2024-12-07 10:42:58.590993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.323 [2024-12-07 10:42:58.591014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.323 [2024-12-07 10:42:58.591027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:59.323 [2024-12-07 10:42:58.591047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.323 [2024-12-07 10:42:58.591058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.582 [2024-12-07 10:42:58.716500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.582 [2024-12-07 10:42:58.716555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:59.582 [2024-12-07 10:42:58.716574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.582 [2024-12-07 10:42:58.716587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.582 [2024-12-07 10:42:58.815851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.582 [2024-12-07 10:42:58.815915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:59.582 [2024-12-07 10:42:58.815932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.582 [2024-12-07 10:42:58.815946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.582 [2024-12-07 10:42:58.816089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.582 [2024-12-07 10:42:58.816105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:59.582 [2024-12-07 10:42:58.816118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.582 [2024-12-07 10:42:58.816130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:30:59.582 [2024-12-07 10:42:58.816176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.582 [2024-12-07 10:42:58.816190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:59.582 [2024-12-07 10:42:58.816203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.582 [2024-12-07 10:42:58.816221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.582 [2024-12-07 10:42:58.816360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.582 [2024-12-07 10:42:58.816376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:59.582 [2024-12-07 10:42:58.816390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.582 [2024-12-07 10:42:58.816402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.582 [2024-12-07 10:42:58.816447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.582 [2024-12-07 10:42:58.816461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:59.582 [2024-12-07 10:42:58.816474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.582 [2024-12-07 10:42:58.816485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.582 [2024-12-07 10:42:58.816543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.582 [2024-12-07 10:42:58.816558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:59.582 [2024-12-07 10:42:58.816570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.582 [2024-12-07 10:42:58.816583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.582 [2024-12-07 10:42:58.816658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.582 [2024-12-07 10:42:58.816680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:59.583 [2024-12-07 10:42:58.816694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.583 [2024-12-07 10:42:58.816713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.583 [2024-12-07 10:42:58.816869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.491 ms, result 0 00:31:00.961 00:31:00.961 00:31:00.961 10:42:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:02.337 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:31:02.337 10:43:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:31:02.337 10:43:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:31:02.337 10:43:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:02.337 10:43:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:02.597 10:43:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:02.597 10:43:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:02.597 10:43:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 
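The teardown traced above amounts to an md5 verification of the second test file followed by removal of the generated artifacts. A minimal sketch, assuming the spdk_repo paths shown in the trace (an approximation of the restore_kill steps, not the exact dirty_shutdown.sh source):

  testdir=/home/vagrant/spdk_repo/spdk/test/ftl
  md5sum -c "$testdir/testfile2.md5"                 # verify data written before the dirty shutdown
  rm -f "$testdir/config/ftl.json"                   # generated FTL bdev config
  rm -f "$testdir/testfile" "$testdir/testfile2"     # test payloads
  rm -f "$testdir/testfile.md5" "$testdir/testfile2.md5"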
00:31:02.597 10:43:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81208 00:31:02.597 10:43:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81208 ']' 00:31:02.597 10:43:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81208 00:31:02.597 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81208) - No such process 00:31:02.597 Process with pid 81208 is not found 00:31:02.597 10:43:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81208 is not found' 00:31:02.597 10:43:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:31:02.872 10:43:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:31:02.872 Remove shared memory files 00:31:02.872 10:43:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:02.872 10:43:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:02.872 10:43:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:02.872 10:43:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:31:02.872 10:43:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:02.872 10:43:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:02.872 00:31:02.872 real 3m46.063s 00:31:02.872 user 4m16.349s 00:31:02.872 sys 0m42.015s 00:31:02.872 10:43:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:02.872 10:43:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:02.872 ************************************ 00:31:02.872 END TEST ftl_dirty_shutdown 00:31:02.872 ************************************ 00:31:02.872 10:43:02 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:02.872 10:43:02 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:02.872 10:43:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:02.872 10:43:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:03.131 ************************************ 00:31:03.131 START TEST ftl_upgrade_shutdown 00:31:03.131 ************************************ 00:31:03.131 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:03.132 * Looking for test storage... 
00:31:03.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:03.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.132 --rc genhtml_branch_coverage=1 00:31:03.132 --rc genhtml_function_coverage=1 00:31:03.132 --rc genhtml_legend=1 00:31:03.132 --rc geninfo_all_blocks=1 00:31:03.132 --rc geninfo_unexecuted_blocks=1 00:31:03.132 00:31:03.132 ' 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:03.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.132 --rc genhtml_branch_coverage=1 00:31:03.132 --rc genhtml_function_coverage=1 00:31:03.132 --rc genhtml_legend=1 00:31:03.132 --rc geninfo_all_blocks=1 00:31:03.132 --rc geninfo_unexecuted_blocks=1 00:31:03.132 00:31:03.132 ' 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:03.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.132 --rc genhtml_branch_coverage=1 00:31:03.132 --rc genhtml_function_coverage=1 00:31:03.132 --rc genhtml_legend=1 00:31:03.132 --rc geninfo_all_blocks=1 00:31:03.132 --rc geninfo_unexecuted_blocks=1 00:31:03.132 00:31:03.132 ' 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:03.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.132 --rc genhtml_branch_coverage=1 00:31:03.132 --rc genhtml_function_coverage=1 00:31:03.132 --rc genhtml_legend=1 00:31:03.132 --rc geninfo_all_blocks=1 00:31:03.132 --rc geninfo_unexecuted_blocks=1 00:31:03.132 00:31:03.132 ' 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:03.132 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:31:03.391 10:43:02 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83618 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83618 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83618 ']' 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:03.391 10:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:03.391 [2024-12-07 10:43:02.602090] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
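The target side of ftl_upgrade_shutdown is a plain spdk_tgt pinned to core 0 and polled until its RPC socket answers. A minimal sketch of that bring-up, assuming the default /var/tmp/spdk.sock socket (the polling loop is a simplified stand-in for the waitforlisten helper invoked above):

  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/build/bin/spdk_tgt" --cpumask='[0]' &
  spdk_tgt_pid=$!
  # wait until the target answers RPCs on the default socket
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done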
00:31:03.391 [2024-12-07 10:43:02.602228] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83618 ] 00:31:03.650 [2024-12-07 10:43:02.783168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.650 [2024-12-07 10:43:02.914215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:04.585 10:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:31:04.895 10:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:31:04.895 10:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:04.895 10:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:31:04.895 10:43:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:31:04.895 10:43:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:04.895 10:43:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:04.895 10:43:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:31:04.895 10:43:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:31:05.154 10:43:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:05.154 { 00:31:05.154 "name": "basen1", 00:31:05.154 "aliases": [ 00:31:05.154 "987ed648-7573-4f0b-b726-2a448b3242ca" 00:31:05.154 ], 00:31:05.154 "product_name": "NVMe disk", 00:31:05.154 "block_size": 4096, 00:31:05.154 "num_blocks": 1310720, 00:31:05.154 "uuid": "987ed648-7573-4f0b-b726-2a448b3242ca", 00:31:05.154 "numa_id": -1, 00:31:05.154 "assigned_rate_limits": { 00:31:05.154 "rw_ios_per_sec": 0, 00:31:05.154 "rw_mbytes_per_sec": 0, 00:31:05.154 "r_mbytes_per_sec": 0, 00:31:05.154 "w_mbytes_per_sec": 0 00:31:05.154 }, 00:31:05.154 "claimed": true, 00:31:05.154 "claim_type": "read_many_write_one", 00:31:05.154 "zoned": false, 00:31:05.154 "supported_io_types": { 00:31:05.154 "read": true, 00:31:05.154 "write": true, 00:31:05.154 "unmap": true, 00:31:05.154 "flush": true, 00:31:05.154 "reset": true, 00:31:05.154 "nvme_admin": true, 00:31:05.154 "nvme_io": true, 00:31:05.154 "nvme_io_md": false, 00:31:05.154 "write_zeroes": true, 00:31:05.154 "zcopy": false, 00:31:05.154 "get_zone_info": false, 00:31:05.154 "zone_management": false, 00:31:05.154 "zone_append": false, 00:31:05.154 "compare": true, 00:31:05.154 "compare_and_write": false, 00:31:05.154 "abort": true, 00:31:05.154 "seek_hole": false, 00:31:05.154 "seek_data": false, 00:31:05.154 "copy": true, 00:31:05.154 "nvme_iov_md": false 00:31:05.154 }, 00:31:05.154 "driver_specific": { 00:31:05.154 "nvme": [ 00:31:05.154 { 00:31:05.154 "pci_address": "0000:00:11.0", 00:31:05.154 "trid": { 00:31:05.154 "trtype": "PCIe", 00:31:05.154 "traddr": "0000:00:11.0" 00:31:05.154 }, 00:31:05.154 "ctrlr_data": { 00:31:05.154 "cntlid": 0, 00:31:05.154 "vendor_id": "0x1b36", 00:31:05.154 "model_number": "QEMU NVMe Ctrl", 00:31:05.154 "serial_number": "12341", 00:31:05.154 "firmware_revision": "8.0.0", 00:31:05.154 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:05.154 "oacs": { 00:31:05.154 "security": 0, 00:31:05.154 "format": 1, 00:31:05.154 "firmware": 0, 00:31:05.154 "ns_manage": 1 00:31:05.154 }, 00:31:05.154 "multi_ctrlr": false, 00:31:05.154 "ana_reporting": false 00:31:05.154 }, 00:31:05.154 "vs": { 00:31:05.154 "nvme_version": "1.4" 00:31:05.154 }, 00:31:05.154 "ns_data": { 00:31:05.154 "id": 1, 00:31:05.154 "can_share": false 00:31:05.154 } 00:31:05.154 } 00:31:05.154 ], 00:31:05.154 "mp_policy": "active_passive" 00:31:05.154 } 00:31:05.154 } 00:31:05.154 ]' 00:31:05.154 10:43:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:05.154 10:43:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:05.154 10:43:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:05.154 10:43:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:05.154 10:43:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:05.154 10:43:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:31:05.154 10:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:05.154 10:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:31:05.154 10:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:05.413 10:43:04 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:05.413 10:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:05.413 10:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0ebe1643-5c42-4c36-a473-9d046ca1f57d 00:31:05.413 10:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:05.413 10:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0ebe1643-5c42-4c36-a473-9d046ca1f57d 00:31:05.672 10:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:31:05.930 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=95e8261b-9ee3-4a8f-a9a9-d453a0e9d640 00:31:05.930 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 95e8261b-9ee3-4a8f-a9a9-d453a0e9d640 00:31:06.188 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=4e7b44b1-9531-4e98-832a-018cac652899 00:31:06.189 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 4e7b44b1-9531-4e98-832a-018cac652899 ]] 00:31:06.189 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 4e7b44b1-9531-4e98-832a-018cac652899 5120 00:31:06.189 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:31:06.189 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:06.189 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=4e7b44b1-9531-4e98-832a-018cac652899 00:31:06.189 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:31:06.189 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4e7b44b1-9531-4e98-832a-018cac652899 00:31:06.189 10:43:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4e7b44b1-9531-4e98-832a-018cac652899 00:31:06.189 10:43:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:06.189 10:43:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:06.189 10:43:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:06.189 10:43:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e7b44b1-9531-4e98-832a-018cac652899 00:31:06.447 10:43:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:06.447 { 00:31:06.447 "name": "4e7b44b1-9531-4e98-832a-018cac652899", 00:31:06.447 "aliases": [ 00:31:06.447 "lvs/basen1p0" 00:31:06.447 ], 00:31:06.447 "product_name": "Logical Volume", 00:31:06.447 "block_size": 4096, 00:31:06.447 "num_blocks": 5242880, 00:31:06.447 "uuid": "4e7b44b1-9531-4e98-832a-018cac652899", 00:31:06.447 "assigned_rate_limits": { 00:31:06.447 "rw_ios_per_sec": 0, 00:31:06.447 "rw_mbytes_per_sec": 0, 00:31:06.447 "r_mbytes_per_sec": 0, 00:31:06.447 "w_mbytes_per_sec": 0 00:31:06.447 }, 00:31:06.447 "claimed": false, 00:31:06.447 "zoned": false, 00:31:06.447 "supported_io_types": { 00:31:06.447 "read": true, 00:31:06.447 "write": true, 00:31:06.447 "unmap": true, 00:31:06.447 "flush": false, 00:31:06.447 "reset": true, 00:31:06.447 "nvme_admin": false, 00:31:06.447 "nvme_io": false, 00:31:06.447 "nvme_io_md": false, 00:31:06.447 "write_zeroes": 
true, 00:31:06.447 "zcopy": false, 00:31:06.447 "get_zone_info": false, 00:31:06.447 "zone_management": false, 00:31:06.447 "zone_append": false, 00:31:06.447 "compare": false, 00:31:06.447 "compare_and_write": false, 00:31:06.447 "abort": false, 00:31:06.447 "seek_hole": true, 00:31:06.447 "seek_data": true, 00:31:06.447 "copy": false, 00:31:06.447 "nvme_iov_md": false 00:31:06.447 }, 00:31:06.447 "driver_specific": { 00:31:06.447 "lvol": { 00:31:06.447 "lvol_store_uuid": "95e8261b-9ee3-4a8f-a9a9-d453a0e9d640", 00:31:06.447 "base_bdev": "basen1", 00:31:06.447 "thin_provision": true, 00:31:06.447 "num_allocated_clusters": 0, 00:31:06.447 "snapshot": false, 00:31:06.447 "clone": false, 00:31:06.447 "esnap_clone": false 00:31:06.447 } 00:31:06.447 } 00:31:06.447 } 00:31:06.447 ]' 00:31:06.447 10:43:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:06.447 10:43:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:06.447 10:43:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:06.447 10:43:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:31:06.447 10:43:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:31:06.447 10:43:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:31:06.447 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:31:06.447 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:06.447 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:31:06.706 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:31:06.706 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:31:06.706 10:43:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:31:06.965 10:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:31:06.965 10:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:31:06.965 10:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 4e7b44b1-9531-4e98-832a-018cac652899 -c cachen1p0 --l2p_dram_limit 2 00:31:07.225 [2024-12-07 10:43:06.338225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.225 [2024-12-07 10:43:06.338280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:07.225 [2024-12-07 10:43:06.338299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:07.226 [2024-12-07 10:43:06.338310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.226 [2024-12-07 10:43:06.338369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.226 [2024-12-07 10:43:06.338382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:07.226 [2024-12-07 10:43:06.338396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:31:07.226 [2024-12-07 10:43:06.338406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.226 [2024-12-07 10:43:06.338430] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:07.226 [2024-12-07 
10:43:06.339493] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:07.226 [2024-12-07 10:43:06.339531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.226 [2024-12-07 10:43:06.339541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:07.226 [2024-12-07 10:43:06.339557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.105 ms 00:31:07.226 [2024-12-07 10:43:06.339567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.226 [2024-12-07 10:43:06.339602] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 4ca5ef52-9281-4b28-a259-52fb41b59830 00:31:07.226 [2024-12-07 10:43:06.342036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.226 [2024-12-07 10:43:06.342069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:31:07.226 [2024-12-07 10:43:06.342081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:07.226 [2024-12-07 10:43:06.342094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.226 [2024-12-07 10:43:06.356300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.226 [2024-12-07 10:43:06.356335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:07.226 [2024-12-07 10:43:06.356348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.158 ms 00:31:07.226 [2024-12-07 10:43:06.356361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.226 [2024-12-07 10:43:06.356448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.226 [2024-12-07 10:43:06.356465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:07.226 [2024-12-07 10:43:06.356476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:07.226 [2024-12-07 10:43:06.356492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.226 [2024-12-07 10:43:06.356539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.226 [2024-12-07 10:43:06.356555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:07.226 [2024-12-07 10:43:06.356569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:07.226 [2024-12-07 10:43:06.356583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.226 [2024-12-07 10:43:06.356613] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:07.226 [2024-12-07 10:43:06.362615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.226 [2024-12-07 10:43:06.362652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:07.226 [2024-12-07 10:43:06.362670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.014 ms 00:31:07.226 [2024-12-07 10:43:06.362680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.226 [2024-12-07 10:43:06.362713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.226 [2024-12-07 10:43:06.362723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:07.226 [2024-12-07 10:43:06.362737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:07.226 [2024-12-07 10:43:06.362746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:07.226 [2024-12-07 10:43:06.362798] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:31:07.226 [2024-12-07 10:43:06.362931] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:07.226 [2024-12-07 10:43:06.362952] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:07.226 [2024-12-07 10:43:06.362965] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:07.226 [2024-12-07 10:43:06.362994] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:07.226 [2024-12-07 10:43:06.363006] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:07.226 [2024-12-07 10:43:06.363022] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:07.226 [2024-12-07 10:43:06.363032] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:07.226 [2024-12-07 10:43:06.363051] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:07.226 [2024-12-07 10:43:06.363077] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:07.226 [2024-12-07 10:43:06.363091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.226 [2024-12-07 10:43:06.363100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:07.226 [2024-12-07 10:43:06.363114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.295 ms 00:31:07.226 [2024-12-07 10:43:06.363124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.226 [2024-12-07 10:43:06.363199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.226 [2024-12-07 10:43:06.363220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:07.226 [2024-12-07 10:43:06.363234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:31:07.226 [2024-12-07 10:43:06.363243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.226 [2024-12-07 10:43:06.363338] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:07.226 [2024-12-07 10:43:06.363352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:07.226 [2024-12-07 10:43:06.363365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:07.226 [2024-12-07 10:43:06.363375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.226 [2024-12-07 10:43:06.363389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:07.226 [2024-12-07 10:43:06.363397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:07.226 [2024-12-07 10:43:06.363409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:07.226 [2024-12-07 10:43:06.363419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:07.226 [2024-12-07 10:43:06.363433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:07.226 [2024-12-07 10:43:06.363441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.226 [2024-12-07 10:43:06.363453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:07.226 [2024-12-07 10:43:06.363462] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:31:07.226 [2024-12-07 10:43:06.363474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.226 [2024-12-07 10:43:06.363482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:07.226 [2024-12-07 10:43:06.363495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:07.226 [2024-12-07 10:43:06.363503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.226 [2024-12-07 10:43:06.363517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:07.226 [2024-12-07 10:43:06.363526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:07.226 [2024-12-07 10:43:06.363536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.226 [2024-12-07 10:43:06.363546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:07.226 [2024-12-07 10:43:06.363557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:07.226 [2024-12-07 10:43:06.363566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:07.226 [2024-12-07 10:43:06.363577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:07.226 [2024-12-07 10:43:06.363585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:07.226 [2024-12-07 10:43:06.363597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:07.226 [2024-12-07 10:43:06.363605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:07.226 [2024-12-07 10:43:06.363616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:07.226 [2024-12-07 10:43:06.363624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:07.226 [2024-12-07 10:43:06.363636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:07.226 [2024-12-07 10:43:06.363644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:07.226 [2024-12-07 10:43:06.363655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:07.226 [2024-12-07 10:43:06.363664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:07.226 [2024-12-07 10:43:06.363677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:07.226 [2024-12-07 10:43:06.363686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.226 [2024-12-07 10:43:06.363698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:07.226 [2024-12-07 10:43:06.363706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:07.226 [2024-12-07 10:43:06.363717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.226 [2024-12-07 10:43:06.363725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:07.226 [2024-12-07 10:43:06.363737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:07.226 [2024-12-07 10:43:06.363746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.226 [2024-12-07 10:43:06.363758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:07.226 [2024-12-07 10:43:06.363766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:07.226 [2024-12-07 10:43:06.363780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.226 [2024-12-07 10:43:06.363788] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:31:07.226 [2024-12-07 10:43:06.363800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:07.226 [2024-12-07 10:43:06.363809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:07.226 [2024-12-07 10:43:06.363823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.226 [2024-12-07 10:43:06.363833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:07.226 [2024-12-07 10:43:06.363848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:07.227 [2024-12-07 10:43:06.363857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:07.227 [2024-12-07 10:43:06.363870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:07.227 [2024-12-07 10:43:06.363878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:07.227 [2024-12-07 10:43:06.363890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:07.227 [2024-12-07 10:43:06.363901] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:07.227 [2024-12-07 10:43:06.363919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:07.227 [2024-12-07 10:43:06.363930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:07.227 [2024-12-07 10:43:06.363942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:07.227 [2024-12-07 10:43:06.363952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:07.227 [2024-12-07 10:43:06.363964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:07.227 [2024-12-07 10:43:06.363989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:07.227 [2024-12-07 10:43:06.364004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:07.227 [2024-12-07 10:43:06.364014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:07.227 [2024-12-07 10:43:06.364027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:07.227 [2024-12-07 10:43:06.364036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:07.227 [2024-12-07 10:43:06.364054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:07.227 [2024-12-07 10:43:06.364063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:07.227 [2024-12-07 10:43:06.364076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:07.227 [2024-12-07 10:43:06.364086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:07.227 [2024-12-07 10:43:06.364099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:07.227 [2024-12-07 10:43:06.364109] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:07.227 [2024-12-07 10:43:06.364126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:07.227 [2024-12-07 10:43:06.364137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:07.227 [2024-12-07 10:43:06.364150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:07.227 [2024-12-07 10:43:06.364159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:07.227 [2024-12-07 10:43:06.364172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:07.227 [2024-12-07 10:43:06.364182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.227 [2024-12-07 10:43:06.364195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:07.227 [2024-12-07 10:43:06.364204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.902 ms 00:31:07.227 [2024-12-07 10:43:06.364217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.227 [2024-12-07 10:43:06.364256] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
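Condensed, the RPC sequence that produced the ftl bdev whose layout is dumped above looks as follows (UUID variables are stand-ins for the values printed earlier in the trace; device addresses and sizes are the ones from this run):

  rpc="$spdk/scripts/rpc.py"
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # 20 GiB base device -> basen1
  $rpc bdev_lvol_delete_lvstore -u "$old_lvs_uuid"                    # clear any leftover lvstore
  $rpc bdev_lvol_create_lvstore basen1 lvs                            # lvstore on the base namespace
  $rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs_uuid"              # 20 GiB thin-provisioned lvol
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # NV cache device -> cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                            # one 5 GiB slice -> cachen1p0
  $rpc -t 60 bdev_ftl_create -b ftl -d "$lvol_uuid" -c cachen1p0 --l2p_dram_limit 2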
00:31:07.227 [2024-12-07 10:43:06.364277] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:10.518 [2024-12-07 10:43:09.751537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.518 [2024-12-07 10:43:09.751814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:10.518 [2024-12-07 10:43:09.751856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3392.778 ms 00:31:10.518 [2024-12-07 10:43:09.751872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.518 [2024-12-07 10:43:09.785476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.518 [2024-12-07 10:43:09.785522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:10.518 [2024-12-07 10:43:09.785539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.326 ms 00:31:10.518 [2024-12-07 10:43:09.785553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.518 [2024-12-07 10:43:09.785636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.518 [2024-12-07 10:43:09.785653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:10.518 [2024-12-07 10:43:09.785664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:10.518 [2024-12-07 10:43:09.785684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.518 [2024-12-07 10:43:09.826072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.518 [2024-12-07 10:43:09.826128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:10.518 [2024-12-07 10:43:09.826143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.395 ms 00:31:10.518 [2024-12-07 10:43:09.826173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.518 [2024-12-07 10:43:09.826210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.518 [2024-12-07 10:43:09.826239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:10.518 [2024-12-07 10:43:09.826250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:10.518 [2024-12-07 10:43:09.826262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.518 [2024-12-07 10:43:09.826758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.518 [2024-12-07 10:43:09.826776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:10.518 [2024-12-07 10:43:09.826798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.444 ms 00:31:10.518 [2024-12-07 10:43:09.826811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.518 [2024-12-07 10:43:09.826851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.518 [2024-12-07 10:43:09.826864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:10.518 [2024-12-07 10:43:09.826877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:31:10.518 [2024-12-07 10:43:09.826893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.518 [2024-12-07 10:43:09.846780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.518 [2024-12-07 10:43:09.846820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:10.518 [2024-12-07 10:43:09.846834] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.900 ms 00:31:10.518 [2024-12-07 10:43:09.846862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.778 [2024-12-07 10:43:09.885264] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:10.778 [2024-12-07 10:43:09.886503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.778 [2024-12-07 10:43:09.886539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:10.778 [2024-12-07 10:43:09.886560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.618 ms 00:31:10.778 [2024-12-07 10:43:09.886575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.778 [2024-12-07 10:43:09.921205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.778 [2024-12-07 10:43:09.921244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:31:10.778 [2024-12-07 10:43:09.921260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.636 ms 00:31:10.778 [2024-12-07 10:43:09.921271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.778 [2024-12-07 10:43:09.921360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.778 [2024-12-07 10:43:09.921375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:10.778 [2024-12-07 10:43:09.921391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:31:10.778 [2024-12-07 10:43:09.921401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.778 [2024-12-07 10:43:09.955923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.778 [2024-12-07 10:43:09.955963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:31:10.778 [2024-12-07 10:43:09.956002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.522 ms 00:31:10.778 [2024-12-07 10:43:09.956019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.778 [2024-12-07 10:43:09.992716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.778 [2024-12-07 10:43:09.992750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:31:10.778 [2024-12-07 10:43:09.992766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.688 ms 00:31:10.778 [2024-12-07 10:43:09.992776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.778 [2024-12-07 10:43:09.993494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.778 [2024-12-07 10:43:09.993522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:10.778 [2024-12-07 10:43:09.993538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.677 ms 00:31:10.778 [2024-12-07 10:43:09.993551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.778 [2024-12-07 10:43:10.094160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.778 [2024-12-07 10:43:10.094203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:31:10.778 [2024-12-07 10:43:10.094225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.689 ms 00:31:10.778 [2024-12-07 10:43:10.094236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.038 [2024-12-07 10:43:10.130593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:11.038 [2024-12-07 10:43:10.130634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:31:11.038 [2024-12-07 10:43:10.130659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.313 ms 00:31:11.038 [2024-12-07 10:43:10.130669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.038 [2024-12-07 10:43:10.166851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.038 [2024-12-07 10:43:10.167035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:31:11.038 [2024-12-07 10:43:10.167063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.190 ms 00:31:11.038 [2024-12-07 10:43:10.167074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.038 [2024-12-07 10:43:10.203124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.038 [2024-12-07 10:43:10.203158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:11.038 [2024-12-07 10:43:10.203175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.030 ms 00:31:11.038 [2024-12-07 10:43:10.203185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.038 [2024-12-07 10:43:10.203234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.038 [2024-12-07 10:43:10.203246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:11.038 [2024-12-07 10:43:10.203263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:11.038 [2024-12-07 10:43:10.203272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.038 [2024-12-07 10:43:10.203389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.038 [2024-12-07 10:43:10.203404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:11.038 [2024-12-07 10:43:10.203417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:31:11.038 [2024-12-07 10:43:10.203427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.038 [2024-12-07 10:43:10.204450] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3872.053 ms, result 0 00:31:11.038 { 00:31:11.038 "name": "ftl", 00:31:11.038 "uuid": "4ca5ef52-9281-4b28-a259-52fb41b59830" 00:31:11.038 } 00:31:11.038 10:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:31:11.298 [2024-12-07 10:43:10.399356] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.299 10:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:31:11.299 10:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:31:11.559 [2024-12-07 10:43:10.743137] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:11.559 10:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:31:11.819 [2024-12-07 10:43:10.944472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:11.819 10:43:10 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:12.079 Fill FTL, iteration 1 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83746 00:31:12.079 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:31:12.080 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:31:12.080 10:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83746 /var/tmp/spdk.tgt.sock 00:31:12.080 10:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83746 ']' 00:31:12.080 10:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:31:12.080 10:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:12.080 10:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:31:12.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:31:12.080 10:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:12.080 10:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:12.080 [2024-12-07 10:43:11.409447] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
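The ftl/common.sh@121-126 trace above is the target-side export: the freshly created FTL bdev ("ftl", UUID 4ca5ef52-9281-4b28-a259-52fb41b59830) is published over NVMe/TCP on 127.0.0.1:4420 and the target configuration is then saved (presumably into test/ftl/config/tgt.json, which the later restart loads). Condensed to the bare RPC calls, with the rpc.py path shortened and the arguments verbatim from the log:

    # Publish bdev "ftl" through an NVMe/TCP subsystem, then persist the target config.
    scripts/rpc.py nvmf_create_transport --trtype TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    scripts/rpc.py save_config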
00:31:12.080 [2024-12-07 10:43:11.410344] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83746 ] 00:31:12.339 [2024-12-07 10:43:11.586203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.599 [2024-12-07 10:43:11.703567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.537 10:43:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:13.537 10:43:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:13.537 10:43:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:31:13.797 ftln1 00:31:13.797 10:43:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:31:13.797 10:43:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:31:13.797 10:43:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:31:13.797 10:43:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83746 00:31:13.797 10:43:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83746 ']' 00:31:13.797 10:43:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83746 00:31:13.797 10:43:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:13.797 10:43:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:14.057 10:43:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83746 00:31:14.057 killing process with pid 83746 00:31:14.057 10:43:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:14.057 10:43:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:14.057 10:43:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83746' 00:31:14.057 10:43:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83746 00:31:14.057 10:43:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83746 00:31:16.595 10:43:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:31:16.595 10:43:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:16.595 [2024-12-07 10:43:15.656185] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
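What the tcp_initiator_setup trace above boils down to: a second, throwaway spdk_tgt is started on its own RPC socket, the NVMe/TCP namespace exported a moment ago is attached back as local bdev "ftln1", that bdev configuration is captured once into test/ftl/config/ini.json so spdk_dd can drive it without a live RPC server, and the helper target (pid 83746) is then killed. A sketch under those assumptions; the redirection into ini.json is inferred from the -f check and the later --json=ini.json usage, it is not shown literally in the trace:

    # Helper target on its own socket, pinned to core 1.
    build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!              # 83746 in this run; waitforlisten polls the socket
    # Attach the exported namespace; the RPC prints the new bdev name, "ftln1".
    scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    # Capture the bdev subsystem config for spdk_dd, then drop the helper target.
    {
        echo '{"subsystems": ['
        scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
        echo ']}'
    } > test/ftl/config/ini.json
    kill "$spdk_ini_pid"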
00:31:16.595 [2024-12-07 10:43:15.656302] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83806 ] 00:31:16.595 [2024-12-07 10:43:15.836412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.854 [2024-12-07 10:43:15.961860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.279  [2024-12-07T10:43:18.593Z] Copying: 275/1024 [MB] (275 MBps) [2024-12-07T10:43:19.531Z] Copying: 549/1024 [MB] (274 MBps) [2024-12-07T10:43:20.467Z] Copying: 815/1024 [MB] (266 MBps) [2024-12-07T10:43:21.842Z] Copying: 1024/1024 [MB] (average 269 MBps) 00:31:22.489 00:31:22.489 Calculate MD5 checksum, iteration 1 00:31:22.489 10:43:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:31:22.489 10:43:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:31:22.489 10:43:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:22.489 10:43:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:22.489 10:43:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:22.489 10:43:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:22.489 10:43:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:22.489 10:43:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:22.489 [2024-12-07 10:43:21.561062] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
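Each fill/verify pass above is a plain spdk_dd run against the captured ini.json: 1 GiB of /dev/urandom is written into ftln1 at queue depth 2, then the same 1 GiB window is read back into test/ftl/file for checksumming. With paths shortened and flags verbatim from the log, iteration 1 is:

    # Fill: 1024 x 1 MiB random writes into the FTL namespace (seek = 0 for iteration 1).
    build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
    # Verify source: read the same window back into a scratch file.
    build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=test/ftl/config/ini.json \
        --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0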
00:31:22.489 [2024-12-07 10:43:21.561370] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83870 ] 00:31:22.489 [2024-12-07 10:43:21.742671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.747 [2024-12-07 10:43:21.872742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.120  [2024-12-07T10:43:24.038Z] Copying: 661/1024 [MB] (661 MBps) [2024-12-07T10:43:25.415Z] Copying: 1024/1024 [MB] (average 636 MBps) 00:31:26.062 00:31:26.062 10:43:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:31:26.062 10:43:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:27.442 10:43:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:27.442 Fill FTL, iteration 2 00:31:27.442 10:43:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=abe4d25d72bab128dbdf48da2db3961d 00:31:27.442 10:43:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:27.442 10:43:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:27.442 10:43:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:31:27.442 10:43:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:27.442 10:43:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:27.442 10:43:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:27.442 10:43:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:27.442 10:43:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:27.442 10:43:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:27.442 [2024-12-07 10:43:26.756686] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
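The bookkeeping traced at upgrade_shutdown.sh@41-48 just records the MD5 of the read-back window and moves both offsets forward by the pass size before the next iteration. The xtrace only shows the evaluated values (seek=1024, skip=1024, sums[i]=abe4d25d...), so the line below is an illustrative condensation rather than the script's literal text:

    # After each verify pass: remember the checksum of the scratch file.
    sums[i]=$(md5sum test/ftl/file | cut -f1 -d' ')   # abe4d25d72bab128dbdf48da2db3961d for iteration 1
    # seek and skip then both advance by count (1024 MiB) before "Fill FTL, iteration 2".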
00:31:27.442 [2024-12-07 10:43:26.757138] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83926 ] 00:31:27.701 [2024-12-07 10:43:26.936947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.960 [2024-12-07 10:43:27.067702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.338  [2024-12-07T10:43:29.631Z] Copying: 275/1024 [MB] (275 MBps) [2024-12-07T10:43:31.009Z] Copying: 547/1024 [MB] (272 MBps) [2024-12-07T10:43:31.580Z] Copying: 816/1024 [MB] (269 MBps) [2024-12-07T10:43:32.960Z] Copying: 1024/1024 [MB] (average 270 MBps) 00:31:33.607 00:31:33.607 10:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:33.607 Calculate MD5 checksum, iteration 2 00:31:33.607 10:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:33.607 10:43:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:33.607 10:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:33.607 10:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:33.607 10:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:33.607 10:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:33.607 10:43:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:33.607 [2024-12-07 10:43:32.670749] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:31:33.607 [2024-12-07 10:43:32.671215] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83984 ] 00:31:33.607 [2024-12-07 10:43:32.851587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.866 [2024-12-07 10:43:32.970083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.776  [2024-12-07T10:43:35.389Z] Copying: 648/1024 [MB] (648 MBps) [2024-12-07T10:43:36.762Z] Copying: 1024/1024 [MB] (average 646 MBps) 00:31:37.409 00:31:37.667 10:43:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:37.667 10:43:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:39.567 10:43:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:39.567 10:43:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5662478ea9222ebad12bd48217bd0ea6 00:31:39.567 10:43:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:39.567 10:43:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:39.567 10:43:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:39.567 [2024-12-07 10:43:38.630895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.567 [2024-12-07 10:43:38.631128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:39.567 [2024-12-07 10:43:38.631158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:39.567 [2024-12-07 10:43:38.631171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.567 [2024-12-07 10:43:38.631214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.567 [2024-12-07 10:43:38.631233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:39.567 [2024-12-07 10:43:38.631245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:39.567 [2024-12-07 10:43:38.631256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.567 [2024-12-07 10:43:38.631278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.567 [2024-12-07 10:43:38.631290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:39.567 [2024-12-07 10:43:38.631301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:39.567 [2024-12-07 10:43:38.631311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.567 [2024-12-07 10:43:38.631381] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.470 ms, result 0 00:31:39.567 true 00:31:39.567 10:43:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:39.567 { 00:31:39.567 "name": "ftl", 00:31:39.567 "properties": [ 00:31:39.567 { 00:31:39.567 "name": "superblock_version", 00:31:39.567 "value": 5, 00:31:39.567 "read-only": true 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "name": "base_device", 00:31:39.567 "bands": [ 00:31:39.567 { 00:31:39.567 "id": 0, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 
00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 1, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 2, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 3, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 4, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 5, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 6, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 7, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 8, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 9, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 10, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 11, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 12, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 13, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 14, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 15, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 16, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 17, 00:31:39.567 "state": "FREE", 00:31:39.567 "validity": 0.0 00:31:39.567 } 00:31:39.567 ], 00:31:39.567 "read-only": true 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "name": "cache_device", 00:31:39.567 "type": "bdev", 00:31:39.567 "chunks": [ 00:31:39.567 { 00:31:39.567 "id": 0, 00:31:39.567 "state": "INACTIVE", 00:31:39.567 "utilization": 0.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 1, 00:31:39.567 "state": "CLOSED", 00:31:39.567 "utilization": 1.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 2, 00:31:39.567 "state": "CLOSED", 00:31:39.567 "utilization": 1.0 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 3, 00:31:39.567 "state": "OPEN", 00:31:39.567 "utilization": 0.001953125 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "id": 4, 00:31:39.567 "state": "OPEN", 00:31:39.567 "utilization": 0.0 00:31:39.567 } 00:31:39.567 ], 00:31:39.567 "read-only": true 00:31:39.567 }, 00:31:39.567 { 00:31:39.567 "name": "verbose_mode", 00:31:39.568 "value": true, 00:31:39.568 "unit": "", 00:31:39.568 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:39.568 }, 00:31:39.568 { 00:31:39.568 "name": "prep_upgrade_on_shutdown", 00:31:39.568 "value": false, 00:31:39.568 "unit": "", 00:31:39.568 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:39.568 } 00:31:39.568 ] 00:31:39.568 } 00:31:39.568 10:43:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:39.827 [2024-12-07 10:43:39.042832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:39.827 [2024-12-07 10:43:39.042877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:39.827 [2024-12-07 10:43:39.042891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:39.827 [2024-12-07 10:43:39.042917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.827 [2024-12-07 10:43:39.042943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.827 [2024-12-07 10:43:39.042954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:39.827 [2024-12-07 10:43:39.042963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:39.827 [2024-12-07 10:43:39.042973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.827 [2024-12-07 10:43:39.043007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.827 [2024-12-07 10:43:39.043018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:39.827 [2024-12-07 10:43:39.043028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:39.827 [2024-12-07 10:43:39.043039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.827 [2024-12-07 10:43:39.043096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.252 ms, result 0 00:31:39.827 true 00:31:39.827 10:43:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:39.827 10:43:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:39.827 10:43:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:40.086 10:43:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:40.086 10:43:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:40.086 10:43:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:40.346 [2024-12-07 10:43:39.490847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:40.346 [2024-12-07 10:43:39.490894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:40.346 [2024-12-07 10:43:39.490908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:40.346 [2024-12-07 10:43:39.490919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:40.346 [2024-12-07 10:43:39.490942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:40.346 [2024-12-07 10:43:39.490952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:40.346 [2024-12-07 10:43:39.490962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:40.346 [2024-12-07 10:43:39.490972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:40.346 [2024-12-07 10:43:39.491007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:40.346 [2024-12-07 10:43:39.491017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:40.346 [2024-12-07 10:43:39.491027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:40.346 [2024-12-07 10:43:39.491036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:40.346 [2024-12-07 10:43:39.491094] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.232 ms, result 0 00:31:40.346 true 00:31:40.346 10:43:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:40.606 { 00:31:40.606 "name": "ftl", 00:31:40.606 "properties": [ 00:31:40.606 { 00:31:40.606 "name": "superblock_version", 00:31:40.606 "value": 5, 00:31:40.606 "read-only": true 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "name": "base_device", 00:31:40.606 "bands": [ 00:31:40.606 { 00:31:40.606 "id": 0, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 1, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 2, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 3, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 4, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 5, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 6, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 7, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 8, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 9, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 10, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 11, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 12, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 13, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 14, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 15, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 16, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 17, 00:31:40.606 "state": "FREE", 00:31:40.606 "validity": 0.0 00:31:40.606 } 00:31:40.606 ], 00:31:40.606 "read-only": true 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "name": "cache_device", 00:31:40.606 "type": "bdev", 00:31:40.606 "chunks": [ 00:31:40.606 { 00:31:40.606 "id": 0, 00:31:40.606 "state": "INACTIVE", 00:31:40.606 "utilization": 0.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 1, 00:31:40.606 "state": "CLOSED", 00:31:40.606 "utilization": 1.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 2, 00:31:40.606 "state": "CLOSED", 00:31:40.606 "utilization": 1.0 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 3, 00:31:40.606 "state": "OPEN", 00:31:40.606 "utilization": 0.001953125 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "id": 4, 00:31:40.606 "state": "OPEN", 00:31:40.606 "utilization": 0.0 00:31:40.606 } 00:31:40.606 ], 00:31:40.606 "read-only": true 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "name": "verbose_mode", 
00:31:40.606 "value": true, 00:31:40.606 "unit": "", 00:31:40.606 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:40.606 }, 00:31:40.606 { 00:31:40.606 "name": "prep_upgrade_on_shutdown", 00:31:40.606 "value": true, 00:31:40.606 "unit": "", 00:31:40.606 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:40.606 } 00:31:40.606 ] 00:31:40.606 } 00:31:40.606 10:43:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:40.606 10:43:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83618 ]] 00:31:40.606 10:43:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83618 00:31:40.606 10:43:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83618 ']' 00:31:40.606 10:43:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83618 00:31:40.606 10:43:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:40.606 10:43:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:40.606 10:43:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83618 00:31:40.606 10:43:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:40.606 10:43:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:40.606 10:43:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83618' 00:31:40.606 killing process with pid 83618 00:31:40.606 10:43:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83618 00:31:40.606 10:43:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83618 00:31:41.546 [2024-12-07 10:43:40.828530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:41.546 [2024-12-07 10:43:40.848452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.546 [2024-12-07 10:43:40.848491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:41.546 [2024-12-07 10:43:40.848506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:41.546 [2024-12-07 10:43:40.848516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.546 [2024-12-07 10:43:40.848538] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:41.546 [2024-12-07 10:43:40.852574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.546 [2024-12-07 10:43:40.852606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:41.546 [2024-12-07 10:43:40.852618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.027 ms 00:31:41.546 [2024-12-07 10:43:40.852634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:47.836279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.668 [2024-12-07 10:43:47.836344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:49.668 [2024-12-07 10:43:47.836361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6994.956 ms 00:31:49.668 [2024-12-07 10:43:47.836376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:47.837463] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:49.668 [2024-12-07 10:43:47.837486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:49.668 [2024-12-07 10:43:47.837498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.072 ms 00:31:49.668 [2024-12-07 10:43:47.837508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:47.838511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.668 [2024-12-07 10:43:47.838539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:49.668 [2024-12-07 10:43:47.838551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.908 ms 00:31:49.668 [2024-12-07 10:43:47.838568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:47.853092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.668 [2024-12-07 10:43:47.853125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:49.668 [2024-12-07 10:43:47.853138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.512 ms 00:31:49.668 [2024-12-07 10:43:47.853148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:47.861859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.668 [2024-12-07 10:43:47.861896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:49.668 [2024-12-07 10:43:47.861909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.689 ms 00:31:49.668 [2024-12-07 10:43:47.861918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:47.862024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.668 [2024-12-07 10:43:47.862038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:49.668 [2024-12-07 10:43:47.862055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:31:49.668 [2024-12-07 10:43:47.862064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:47.876409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.668 [2024-12-07 10:43:47.876441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:49.668 [2024-12-07 10:43:47.876453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.352 ms 00:31:49.668 [2024-12-07 10:43:47.876462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:47.890776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.668 [2024-12-07 10:43:47.890823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:49.668 [2024-12-07 10:43:47.890836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.303 ms 00:31:49.668 [2024-12-07 10:43:47.890846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:47.904806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.668 [2024-12-07 10:43:47.904946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:49.668 [2024-12-07 10:43:47.904965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.946 ms 00:31:49.668 [2024-12-07 10:43:47.905003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:47.919143] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.668 [2024-12-07 10:43:47.919298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:49.668 [2024-12-07 10:43:47.919317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.083 ms 00:31:49.668 [2024-12-07 10:43:47.919328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:47.919362] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:49.668 [2024-12-07 10:43:47.919390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:49.668 [2024-12-07 10:43:47.919402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:49.668 [2024-12-07 10:43:47.919413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:49.668 [2024-12-07 10:43:47.919424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:49.668 [2024-12-07 10:43:47.919582] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:49.668 [2024-12-07 10:43:47.919592] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 4ca5ef52-9281-4b28-a259-52fb41b59830 00:31:49.668 [2024-12-07 10:43:47.919603] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:49.668 [2024-12-07 10:43:47.919612] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:49.668 [2024-12-07 10:43:47.919621] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:49.668 [2024-12-07 10:43:47.919632] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:49.668 [2024-12-07 10:43:47.919642] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:49.668 [2024-12-07 10:43:47.919657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:49.668 [2024-12-07 10:43:47.919667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:49.668 [2024-12-07 10:43:47.919676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:49.668 [2024-12-07 10:43:47.919686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:49.668 [2024-12-07 10:43:47.919697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.668 [2024-12-07 10:43:47.919711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:49.668 [2024-12-07 10:43:47.919722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:31:49.668 [2024-12-07 10:43:47.919732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:47.938440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.668 [2024-12-07 10:43:47.938571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:49.668 [2024-12-07 10:43:47.938592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.707 ms 00:31:49.668 [2024-12-07 10:43:47.938625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:47.939192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.668 [2024-12-07 10:43:47.939208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:49.668 [2024-12-07 10:43:47.939220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.534 ms 00:31:49.668 [2024-12-07 10:43:47.939231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:48.000880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.668 [2024-12-07 10:43:48.000914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:49.668 [2024-12-07 10:43:48.000931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.668 [2024-12-07 10:43:48.000942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:48.000972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.668 [2024-12-07 10:43:48.000993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:49.668 [2024-12-07 10:43:48.001003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.668 [2024-12-07 10:43:48.001012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.668 [2024-12-07 10:43:48.001095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.669 [2024-12-07 10:43:48.001127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:49.669 [2024-12-07 10:43:48.001137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.669 [2024-12-07 10:43:48.001167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.669 [2024-12-07 10:43:48.001184] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.669 [2024-12-07 10:43:48.001194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:49.669 [2024-12-07 10:43:48.001204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.669 [2024-12-07 10:43:48.001215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.669 [2024-12-07 10:43:48.117854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.669 [2024-12-07 10:43:48.117905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:49.669 [2024-12-07 10:43:48.117921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.669 [2024-12-07 10:43:48.117938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.669 [2024-12-07 10:43:48.215480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.669 [2024-12-07 10:43:48.215526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:49.669 [2024-12-07 10:43:48.215541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.669 [2024-12-07 10:43:48.215551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.669 [2024-12-07 10:43:48.215646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.669 [2024-12-07 10:43:48.215657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:49.669 [2024-12-07 10:43:48.215668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.669 [2024-12-07 10:43:48.215678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.669 [2024-12-07 10:43:48.215726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.669 [2024-12-07 10:43:48.215737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:49.669 [2024-12-07 10:43:48.215746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.669 [2024-12-07 10:43:48.215756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.669 [2024-12-07 10:43:48.215869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.669 [2024-12-07 10:43:48.215881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:49.669 [2024-12-07 10:43:48.215891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.669 [2024-12-07 10:43:48.215900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.669 [2024-12-07 10:43:48.215934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.669 [2024-12-07 10:43:48.215950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:49.669 [2024-12-07 10:43:48.215960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.669 [2024-12-07 10:43:48.215969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.669 [2024-12-07 10:43:48.216025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.669 [2024-12-07 10:43:48.216036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:49.669 [2024-12-07 10:43:48.216046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.669 [2024-12-07 10:43:48.216055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.669 
[2024-12-07 10:43:48.216101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.669 [2024-12-07 10:43:48.216113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:49.669 [2024-12-07 10:43:48.216122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.669 [2024-12-07 10:43:48.216132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.669 [2024-12-07 10:43:48.216264] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7379.750 ms, result 0 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84187 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84187 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84187 ']' 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:52.959 10:43:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:52.959 [2024-12-07 10:43:51.967619] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
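Before that long "FTL shutdown" management process, the test armed the upgrade path and confirmed the write-buffer cache actually held data (upgrade_shutdown.sh@52-64 above); after it, a fresh spdk_tgt (pid 84187) is brought back up from the saved tgt.json and FTL reopens in its dirty, upgrade-prepared state. The check and the restart, condensed with shortened paths; the exit on an empty cache is an assumption, the trace only shows the [[ 3 -eq 0 ]] test:

    # Arm prep_upgrade_on_shutdown and make sure at least one NV-cache chunk is in use.
    scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    used=$(scripts/rpc.py bdev_ftl_get_properties -b ftl \
           | jq '[.properties[] | select(.name == "cache_device") | .chunks[]
                  | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && exit 1          # 3 chunks in use in this run
    # Restart: a fresh target is launched from the saved config on core 0
    # (the previous target, pid 83618, was killed above).
    build/bin/spdk_tgt '--cpumask=[0]' --config=test/ftl/config/tgt.json &
    spdk_tgt_pid=$!                      # 84187 here; waitforlisten blocks until RPC is up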
00:31:52.959 [2024-12-07 10:43:51.968057] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84187 ] 00:31:52.959 [2024-12-07 10:43:52.165530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.959 [2024-12-07 10:43:52.299992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.406 [2024-12-07 10:43:53.282795] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:54.406 [2024-12-07 10:43:53.282870] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:54.406 [2024-12-07 10:43:53.428953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.406 [2024-12-07 10:43:53.429010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:54.406 [2024-12-07 10:43:53.429025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:54.406 [2024-12-07 10:43:53.429035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.406 [2024-12-07 10:43:53.429092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.406 [2024-12-07 10:43:53.429105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:54.406 [2024-12-07 10:43:53.429114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:31:54.406 [2024-12-07 10:43:53.429124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.406 [2024-12-07 10:43:53.429168] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:54.406 [2024-12-07 10:43:53.430163] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:54.406 [2024-12-07 10:43:53.430191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.406 [2024-12-07 10:43:53.430202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:54.406 [2024-12-07 10:43:53.430213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.052 ms 00:31:54.406 [2024-12-07 10:43:53.430223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.406 [2024-12-07 10:43:53.431681] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:54.406 [2024-12-07 10:43:53.449850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.406 [2024-12-07 10:43:53.449885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:54.406 [2024-12-07 10:43:53.449904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.199 ms 00:31:54.406 [2024-12-07 10:43:53.449913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.406 [2024-12-07 10:43:53.449988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.406 [2024-12-07 10:43:53.450001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:54.406 [2024-12-07 10:43:53.450011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:31:54.406 [2024-12-07 10:43:53.450020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.406 [2024-12-07 10:43:53.456821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.406 [2024-12-07 
10:43:53.456960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:54.406 [2024-12-07 10:43:53.457009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.737 ms 00:31:54.406 [2024-12-07 10:43:53.457020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.406 [2024-12-07 10:43:53.457090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.406 [2024-12-07 10:43:53.457103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:54.406 [2024-12-07 10:43:53.457115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:31:54.406 [2024-12-07 10:43:53.457125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.406 [2024-12-07 10:43:53.457170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.406 [2024-12-07 10:43:53.457186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:54.406 [2024-12-07 10:43:53.457197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:54.406 [2024-12-07 10:43:53.457207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.406 [2024-12-07 10:43:53.457234] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:54.406 [2024-12-07 10:43:53.462083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.406 [2024-12-07 10:43:53.462114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:54.406 [2024-12-07 10:43:53.462126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.864 ms 00:31:54.406 [2024-12-07 10:43:53.462156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.406 [2024-12-07 10:43:53.462186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.406 [2024-12-07 10:43:53.462197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:54.406 [2024-12-07 10:43:53.462207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:54.406 [2024-12-07 10:43:53.462217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.406 [2024-12-07 10:43:53.462272] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:54.406 [2024-12-07 10:43:53.462301] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:54.406 [2024-12-07 10:43:53.462334] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:54.406 [2024-12-07 10:43:53.462351] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:54.406 [2024-12-07 10:43:53.462456] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:54.406 [2024-12-07 10:43:53.462469] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:54.406 [2024-12-07 10:43:53.462482] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:54.406 [2024-12-07 10:43:53.462495] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:54.406 [2024-12-07 10:43:53.462506] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:54.406 [2024-12-07 10:43:53.462521] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:54.406 [2024-12-07 10:43:53.462531] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:54.406 [2024-12-07 10:43:53.462541] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:54.406 [2024-12-07 10:43:53.462551] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:54.406 [2024-12-07 10:43:53.462562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.406 [2024-12-07 10:43:53.462572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:54.406 [2024-12-07 10:43:53.462582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.294 ms 00:31:54.406 [2024-12-07 10:43:53.462592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.406 [2024-12-07 10:43:53.462673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.406 [2024-12-07 10:43:53.462685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:54.406 [2024-12-07 10:43:53.462699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:31:54.406 [2024-12-07 10:43:53.462709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.406 [2024-12-07 10:43:53.462798] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:54.406 [2024-12-07 10:43:53.462811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:54.406 [2024-12-07 10:43:53.462821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:54.406 [2024-12-07 10:43:53.462831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:54.406 [2024-12-07 10:43:53.462842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:54.406 [2024-12-07 10:43:53.462851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:54.406 [2024-12-07 10:43:53.462861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:54.406 [2024-12-07 10:43:53.462871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:54.406 [2024-12-07 10:43:53.462880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:54.406 [2024-12-07 10:43:53.462890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:54.406 [2024-12-07 10:43:53.462903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:54.406 [2024-12-07 10:43:53.462913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:54.406 [2024-12-07 10:43:53.462922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:54.406 [2024-12-07 10:43:53.462931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:54.406 [2024-12-07 10:43:53.462941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:54.406 [2024-12-07 10:43:53.462950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:54.406 [2024-12-07 10:43:53.462959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:54.406 [2024-12-07 10:43:53.462968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:54.406 [2024-12-07 10:43:53.462993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:54.406 [2024-12-07 10:43:53.463003] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:54.406 [2024-12-07 10:43:53.463013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:54.406 [2024-12-07 10:43:53.463021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:54.406 [2024-12-07 10:43:53.463030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:54.406 [2024-12-07 10:43:53.463050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:54.406 [2024-12-07 10:43:53.463060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:54.407 [2024-12-07 10:43:53.463069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:54.407 [2024-12-07 10:43:53.463078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:54.407 [2024-12-07 10:43:53.463087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:54.407 [2024-12-07 10:43:53.463096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:54.407 [2024-12-07 10:43:53.463105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:54.407 [2024-12-07 10:43:53.463114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:54.407 [2024-12-07 10:43:53.463123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:54.407 [2024-12-07 10:43:53.463133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:54.407 [2024-12-07 10:43:53.463142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:54.407 [2024-12-07 10:43:53.463151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:54.407 [2024-12-07 10:43:53.463160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:54.407 [2024-12-07 10:43:53.463169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:54.407 [2024-12-07 10:43:53.463178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:54.407 [2024-12-07 10:43:53.463187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:54.407 [2024-12-07 10:43:53.463196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:54.407 [2024-12-07 10:43:53.463205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:54.407 [2024-12-07 10:43:53.463214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:54.407 [2024-12-07 10:43:53.463225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:54.407 [2024-12-07 10:43:53.463234] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:54.407 [2024-12-07 10:43:53.463244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:54.407 [2024-12-07 10:43:53.463253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:54.407 [2024-12-07 10:43:53.463263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:54.407 [2024-12-07 10:43:53.463277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:54.407 [2024-12-07 10:43:53.463287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:54.407 [2024-12-07 10:43:53.463296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:54.407 [2024-12-07 10:43:53.463306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:54.407 [2024-12-07 10:43:53.463315] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:54.407 [2024-12-07 10:43:53.463325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:54.407 [2024-12-07 10:43:53.463340] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:54.407 [2024-12-07 10:43:53.463353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:54.407 [2024-12-07 10:43:53.463365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:54.407 [2024-12-07 10:43:53.463375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:54.407 [2024-12-07 10:43:53.463386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:54.407 [2024-12-07 10:43:53.463396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:54.407 [2024-12-07 10:43:53.463406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:54.407 [2024-12-07 10:43:53.463416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:54.407 [2024-12-07 10:43:53.463426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:54.407 [2024-12-07 10:43:53.463436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:54.407 [2024-12-07 10:43:53.463446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:54.407 [2024-12-07 10:43:53.463456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:54.407 [2024-12-07 10:43:53.463467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:54.407 [2024-12-07 10:43:53.463477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:54.407 [2024-12-07 10:43:53.463487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:54.407 [2024-12-07 10:43:53.463497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:54.407 [2024-12-07 10:43:53.463507] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:54.407 [2024-12-07 10:43:53.463519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:54.407 [2024-12-07 10:43:53.463530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:54.407 [2024-12-07 10:43:53.463540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:54.407 [2024-12-07 10:43:53.463549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:54.407 [2024-12-07 10:43:53.463562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:54.407 [2024-12-07 10:43:53.463572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.407 [2024-12-07 10:43:53.463583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:54.407 [2024-12-07 10:43:53.463593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.830 ms 00:31:54.407 [2024-12-07 10:43:53.463603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.407 [2024-12-07 10:43:53.463648] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:54.407 [2024-12-07 10:43:53.463662] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:58.601 [2024-12-07 10:43:57.068773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.601 [2024-12-07 10:43:57.068834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:58.601 [2024-12-07 10:43:57.068851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3610.978 ms 00:31:58.601 [2024-12-07 10:43:57.068862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.601 [2024-12-07 10:43:57.105527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.601 [2024-12-07 10:43:57.105578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:58.601 [2024-12-07 10:43:57.105593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.404 ms 00:31:58.601 [2024-12-07 10:43:57.105604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.601 [2024-12-07 10:43:57.105703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.601 [2024-12-07 10:43:57.105723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:58.601 [2024-12-07 10:43:57.105735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:58.601 [2024-12-07 10:43:57.105745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.601 [2024-12-07 10:43:57.151073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.601 [2024-12-07 10:43:57.151119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:58.601 [2024-12-07 10:43:57.151137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.356 ms 00:31:58.601 [2024-12-07 10:43:57.151148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.601 [2024-12-07 10:43:57.151195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.601 [2024-12-07 10:43:57.151206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:58.601 [2024-12-07 10:43:57.151217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:58.601 [2024-12-07 10:43:57.151227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.601 [2024-12-07 10:43:57.151712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.601 [2024-12-07 10:43:57.151727] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:58.601 [2024-12-07 10:43:57.151739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.411 ms 00:31:58.601 [2024-12-07 10:43:57.151751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.601 [2024-12-07 10:43:57.151798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.601 [2024-12-07 10:43:57.151810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:58.601 [2024-12-07 10:43:57.151821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:31:58.601 [2024-12-07 10:43:57.151831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.601 [2024-12-07 10:43:57.171738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.601 [2024-12-07 10:43:57.171947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:58.601 [2024-12-07 10:43:57.171969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.916 ms 00:31:58.601 [2024-12-07 10:43:57.171998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.601 [2024-12-07 10:43:57.215333] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:58.601 [2024-12-07 10:43:57.215375] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:58.601 [2024-12-07 10:43:57.215392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.601 [2024-12-07 10:43:57.215403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:58.601 [2024-12-07 10:43:57.215415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.337 ms 00:31:58.601 [2024-12-07 10:43:57.215426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.601 [2024-12-07 10:43:57.234931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.601 [2024-12-07 10:43:57.234969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:58.601 [2024-12-07 10:43:57.235010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.492 ms 00:31:58.601 [2024-12-07 10:43:57.235021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.601 [2024-12-07 10:43:57.252706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.601 [2024-12-07 10:43:57.252752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:58.601 [2024-12-07 10:43:57.252766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.669 ms 00:31:58.601 [2024-12-07 10:43:57.252776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.601 [2024-12-07 10:43:57.270611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.601 [2024-12-07 10:43:57.270651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:58.601 [2024-12-07 10:43:57.270664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.823 ms 00:31:58.601 [2024-12-07 10:43:57.270673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.601 [2024-12-07 10:43:57.271409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.601 [2024-12-07 10:43:57.271436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:58.601 [2024-12-07 
10:43:57.271448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.615 ms 00:31:58.601 [2024-12-07 10:43:57.271459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.601 [2024-12-07 10:43:57.357040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.602 [2024-12-07 10:43:57.357114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:58.602 [2024-12-07 10:43:57.357132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 85.695 ms 00:31:58.602 [2024-12-07 10:43:57.357143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.602 [2024-12-07 10:43:57.367765] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:58.602 [2024-12-07 10:43:57.368481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.602 [2024-12-07 10:43:57.368507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:58.602 [2024-12-07 10:43:57.368520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.304 ms 00:31:58.602 [2024-12-07 10:43:57.368530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.602 [2024-12-07 10:43:57.368634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.602 [2024-12-07 10:43:57.368651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:58.602 [2024-12-07 10:43:57.368665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:58.602 [2024-12-07 10:43:57.368675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.602 [2024-12-07 10:43:57.368741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.602 [2024-12-07 10:43:57.368755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:58.602 [2024-12-07 10:43:57.368767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:31:58.602 [2024-12-07 10:43:57.368777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.602 [2024-12-07 10:43:57.368800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.602 [2024-12-07 10:43:57.368812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:58.602 [2024-12-07 10:43:57.368827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:58.602 [2024-12-07 10:43:57.368839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.602 [2024-12-07 10:43:57.368876] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:58.602 [2024-12-07 10:43:57.368889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.602 [2024-12-07 10:43:57.368900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:58.602 [2024-12-07 10:43:57.368910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:58.602 [2024-12-07 10:43:57.368921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.602 [2024-12-07 10:43:57.404569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.602 [2024-12-07 10:43:57.404759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:58.602 [2024-12-07 10:43:57.404781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.685 ms 00:31:58.602 [2024-12-07 10:43:57.404792] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.602 [2024-12-07 10:43:57.404906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.602 [2024-12-07 10:43:57.404919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:58.602 [2024-12-07 10:43:57.404931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:31:58.602 [2024-12-07 10:43:57.404941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.602 [2024-12-07 10:43:57.406051] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3983.088 ms, result 0 00:31:58.602 [2024-12-07 10:43:57.421101] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.602 [2024-12-07 10:43:57.437116] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:58.602 [2024-12-07 10:43:57.445953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:58.602 10:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:58.602 10:43:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:58.602 10:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:58.602 10:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:58.602 10:43:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:58.860 [2024-12-07 10:43:58.049337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.860 [2024-12-07 10:43:58.049384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:58.860 [2024-12-07 10:43:58.049404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:58.860 [2024-12-07 10:43:58.049431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.860 [2024-12-07 10:43:58.049456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.860 [2024-12-07 10:43:58.049467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:58.860 [2024-12-07 10:43:58.049477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:58.860 [2024-12-07 10:43:58.049487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.860 [2024-12-07 10:43:58.049506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.860 [2024-12-07 10:43:58.049517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:58.860 [2024-12-07 10:43:58.049527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:58.860 [2024-12-07 10:43:58.049537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.860 [2024-12-07 10:43:58.049597] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.252 ms, result 0 00:31:58.860 true 00:31:58.860 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:59.118 { 00:31:59.118 "name": "ftl", 00:31:59.118 "properties": [ 00:31:59.118 { 00:31:59.118 "name": "superblock_version", 00:31:59.118 "value": 5, 00:31:59.118 "read-only": true 00:31:59.118 }, 
00:31:59.118 { 00:31:59.118 "name": "base_device", 00:31:59.118 "bands": [ 00:31:59.118 { 00:31:59.118 "id": 0, 00:31:59.118 "state": "CLOSED", 00:31:59.118 "validity": 1.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 1, 00:31:59.118 "state": "CLOSED", 00:31:59.118 "validity": 1.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 2, 00:31:59.118 "state": "CLOSED", 00:31:59.118 "validity": 0.007843137254901933 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 3, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 4, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 5, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 6, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 7, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 8, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 9, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 10, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 11, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 12, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 13, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 14, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 15, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 16, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 17, 00:31:59.118 "state": "FREE", 00:31:59.118 "validity": 0.0 00:31:59.118 } 00:31:59.118 ], 00:31:59.118 "read-only": true 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "name": "cache_device", 00:31:59.118 "type": "bdev", 00:31:59.118 "chunks": [ 00:31:59.118 { 00:31:59.118 "id": 0, 00:31:59.118 "state": "INACTIVE", 00:31:59.118 "utilization": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 1, 00:31:59.118 "state": "OPEN", 00:31:59.118 "utilization": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 2, 00:31:59.118 "state": "OPEN", 00:31:59.118 "utilization": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 3, 00:31:59.118 "state": "FREE", 00:31:59.118 "utilization": 0.0 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "id": 4, 00:31:59.118 "state": "FREE", 00:31:59.118 "utilization": 0.0 00:31:59.118 } 00:31:59.118 ], 00:31:59.118 "read-only": true 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "name": "verbose_mode", 00:31:59.118 "value": true, 00:31:59.118 "unit": "", 00:31:59.118 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:59.118 }, 00:31:59.118 { 00:31:59.118 "name": "prep_upgrade_on_shutdown", 00:31:59.118 "value": false, 00:31:59.118 "unit": "", 00:31:59.118 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:59.118 } 00:31:59.118 ] 00:31:59.118 } 00:31:59.118 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:59.118 10:43:58 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:59.118 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:59.378 Validate MD5 checksum, iteration 1 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:59.378 10:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:59.636 [2024-12-07 10:43:58.794493] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:31:59.637 [2024-12-07 10:43:58.794814] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84274 ] 00:31:59.637 [2024-12-07 10:43:58.979614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.895 [2024-12-07 10:43:59.107630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.800  [2024-12-07T10:44:01.411Z] Copying: 665/1024 [MB] (665 MBps) [2024-12-07T10:44:03.315Z] Copying: 1024/1024 [MB] (average 664 MBps) 00:32:03.962 00:32:03.962 10:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:03.962 10:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:05.341 10:44:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:05.341 Validate MD5 checksum, iteration 2 00:32:05.341 10:44:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=abe4d25d72bab128dbdf48da2db3961d 00:32:05.341 10:44:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ abe4d25d72bab128dbdf48da2db3961d != \a\b\e\4\d\2\5\d\7\2\b\a\b\1\2\8\d\b\d\f\4\8\d\a\2\d\b\3\9\6\1\d ]] 00:32:05.341 10:44:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:05.341 10:44:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:05.341 10:44:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:05.341 10:44:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:05.341 10:44:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:05.341 10:44:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:05.341 10:44:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:05.341 10:44:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:05.341 10:44:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:05.600 [2024-12-07 10:44:04.751486] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:32:05.600 [2024-12-07 10:44:04.751761] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84340 ] 00:32:05.600 [2024-12-07 10:44:04.936434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.859 [2024-12-07 10:44:05.062052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.763  [2024-12-07T10:44:07.374Z] Copying: 665/1024 [MB] (665 MBps) [2024-12-07T10:44:10.661Z] Copying: 1024/1024 [MB] (average 665 MBps) 00:32:11.308 00:32:11.308 10:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:11.308 10:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:12.688 10:44:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5662478ea9222ebad12bd48217bd0ea6 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5662478ea9222ebad12bd48217bd0ea6 != \5\6\6\2\4\7\8\e\a\9\2\2\2\e\b\a\d\1\2\b\d\4\8\2\1\7\b\d\0\e\a\6 ]] 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84187 ]] 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84187 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84415 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84415 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84415 ']' 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:12.688 10:44:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:12.948 [2024-12-07 10:44:12.107801] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:32:12.948 [2024-12-07 10:44:12.108691] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84415 ] 00:32:12.948 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84187 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:32:12.948 [2024-12-07 10:44:12.288146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.208 [2024-12-07 10:44:12.399769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.148 [2024-12-07 10:44:13.330758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:14.148 [2024-12-07 10:44:13.330828] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:14.148 [2024-12-07 10:44:13.477065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.148 [2024-12-07 10:44:13.477223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:14.148 [2024-12-07 10:44:13.477262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:14.148 [2024-12-07 10:44:13.477273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.148 [2024-12-07 10:44:13.477341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.148 [2024-12-07 10:44:13.477353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:14.148 [2024-12-07 10:44:13.477364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:32:14.148 [2024-12-07 10:44:13.477374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.148 [2024-12-07 10:44:13.477402] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:14.148 [2024-12-07 10:44:13.478494] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:14.148 [2024-12-07 10:44:13.478523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.148 [2024-12-07 10:44:13.478534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:14.148 [2024-12-07 10:44:13.478545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.132 ms 00:32:14.148 [2024-12-07 10:44:13.478555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.148 [2024-12-07 10:44:13.478910] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:14.410 [2024-12-07 10:44:13.501994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.410 [2024-12-07 10:44:13.502034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:14.410 [2024-12-07 10:44:13.502048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.121 ms 00:32:14.410 [2024-12-07 10:44:13.502075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.410 [2024-12-07 10:44:13.516177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:32:14.410 [2024-12-07 10:44:13.516214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:14.410 [2024-12-07 10:44:13.516225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:32:14.410 [2024-12-07 10:44:13.516235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.410 [2024-12-07 10:44:13.516663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.410 [2024-12-07 10:44:13.516676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:14.410 [2024-12-07 10:44:13.516687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.353 ms 00:32:14.410 [2024-12-07 10:44:13.516696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.410 [2024-12-07 10:44:13.516752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.410 [2024-12-07 10:44:13.516764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:14.410 [2024-12-07 10:44:13.516774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:32:14.410 [2024-12-07 10:44:13.516783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.410 [2024-12-07 10:44:13.516806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.410 [2024-12-07 10:44:13.516816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:14.410 [2024-12-07 10:44:13.516826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:14.410 [2024-12-07 10:44:13.516834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.410 [2024-12-07 10:44:13.516854] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:14.410 [2024-12-07 10:44:13.520720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.410 [2024-12-07 10:44:13.520750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:14.410 [2024-12-07 10:44:13.520762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.876 ms 00:32:14.410 [2024-12-07 10:44:13.520771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.410 [2024-12-07 10:44:13.520806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.410 [2024-12-07 10:44:13.520817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:14.410 [2024-12-07 10:44:13.520827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:14.410 [2024-12-07 10:44:13.520836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.410 [2024-12-07 10:44:13.520870] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:14.410 [2024-12-07 10:44:13.520892] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:14.410 [2024-12-07 10:44:13.520923] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:14.410 [2024-12-07 10:44:13.520943] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:14.410 [2024-12-07 10:44:13.521224] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:14.410 [2024-12-07 10:44:13.521293] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:14.410 [2024-12-07 10:44:13.521345] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:14.410 [2024-12-07 10:44:13.521395] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:14.410 [2024-12-07 10:44:13.521428] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:14.410 [2024-12-07 10:44:13.521440] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:14.410 [2024-12-07 10:44:13.521449] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:14.410 [2024-12-07 10:44:13.521459] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:14.410 [2024-12-07 10:44:13.521469] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:14.410 [2024-12-07 10:44:13.521486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.410 [2024-12-07 10:44:13.521496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:14.410 [2024-12-07 10:44:13.521507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.619 ms 00:32:14.410 [2024-12-07 10:44:13.521516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.410 [2024-12-07 10:44:13.521595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.410 [2024-12-07 10:44:13.521605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:14.410 [2024-12-07 10:44:13.521615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:32:14.410 [2024-12-07 10:44:13.521625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.410 [2024-12-07 10:44:13.521713] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:14.410 [2024-12-07 10:44:13.521730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:14.410 [2024-12-07 10:44:13.521742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:14.410 [2024-12-07 10:44:13.521752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.410 [2024-12-07 10:44:13.521762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:14.410 [2024-12-07 10:44:13.521771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:14.410 [2024-12-07 10:44:13.521781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:14.410 [2024-12-07 10:44:13.521790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:14.410 [2024-12-07 10:44:13.521799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:14.410 [2024-12-07 10:44:13.521808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.410 [2024-12-07 10:44:13.521818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:14.410 [2024-12-07 10:44:13.521827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:14.410 [2024-12-07 10:44:13.521836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.410 [2024-12-07 10:44:13.521846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:14.410 [2024-12-07 10:44:13.521858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:32:14.410 [2024-12-07 10:44:13.521868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.410 [2024-12-07 10:44:13.521877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:14.410 [2024-12-07 10:44:13.521886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:14.410 [2024-12-07 10:44:13.521895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.410 [2024-12-07 10:44:13.521905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:14.410 [2024-12-07 10:44:13.521915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:14.410 [2024-12-07 10:44:13.521933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:14.410 [2024-12-07 10:44:13.521943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:14.410 [2024-12-07 10:44:13.521952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:14.410 [2024-12-07 10:44:13.521962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:14.410 [2024-12-07 10:44:13.521971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:14.410 [2024-12-07 10:44:13.521981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:14.410 [2024-12-07 10:44:13.522003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:14.410 [2024-12-07 10:44:13.522013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:14.410 [2024-12-07 10:44:13.522022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:14.410 [2024-12-07 10:44:13.522031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:14.410 [2024-12-07 10:44:13.522041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:14.410 [2024-12-07 10:44:13.522050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:14.410 [2024-12-07 10:44:13.522059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.410 [2024-12-07 10:44:13.522069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:14.410 [2024-12-07 10:44:13.522078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:14.410 [2024-12-07 10:44:13.522086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.410 [2024-12-07 10:44:13.522095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:14.410 [2024-12-07 10:44:13.522105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:14.410 [2024-12-07 10:44:13.522114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.410 [2024-12-07 10:44:13.522123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:14.410 [2024-12-07 10:44:13.522131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:14.411 [2024-12-07 10:44:13.522140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:14.411 [2024-12-07 10:44:13.522149] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:14.411 [2024-12-07 10:44:13.522159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:14.411 [2024-12-07 10:44:13.522169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:14.411 [2024-12-07 10:44:13.522179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:32:14.411 [2024-12-07 10:44:13.522189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:14.411 [2024-12-07 10:44:13.522199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:14.411 [2024-12-07 10:44:13.522208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:14.411 [2024-12-07 10:44:13.522217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:14.411 [2024-12-07 10:44:13.522226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:14.411 [2024-12-07 10:44:13.522236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:14.411 [2024-12-07 10:44:13.522257] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:14.411 [2024-12-07 10:44:13.522270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:14.411 [2024-12-07 10:44:13.522282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:14.411 [2024-12-07 10:44:13.522293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:14.411 [2024-12-07 10:44:13.522304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:14.411 [2024-12-07 10:44:13.522315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:14.411 [2024-12-07 10:44:13.522325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:14.411 [2024-12-07 10:44:13.522336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:14.411 [2024-12-07 10:44:13.522346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:14.411 [2024-12-07 10:44:13.522356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:14.411 [2024-12-07 10:44:13.522367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:14.411 [2024-12-07 10:44:13.522377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:14.411 [2024-12-07 10:44:13.522387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:14.411 [2024-12-07 10:44:13.522397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:14.411 [2024-12-07 10:44:13.522407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:14.411 [2024-12-07 10:44:13.522417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:14.411 [2024-12-07 10:44:13.522427] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:32:14.411 [2024-12-07 10:44:13.522438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:14.411 [2024-12-07 10:44:13.522453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:14.411 [2024-12-07 10:44:13.522463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:14.411 [2024-12-07 10:44:13.522473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:14.411 [2024-12-07 10:44:13.522484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:14.411 [2024-12-07 10:44:13.522495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.411 [2024-12-07 10:44:13.522505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:14.411 [2024-12-07 10:44:13.522514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.837 ms 00:32:14.411 [2024-12-07 10:44:13.522524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.411 [2024-12-07 10:44:13.557344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.411 [2024-12-07 10:44:13.557481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:14.411 [2024-12-07 10:44:13.557570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.829 ms 00:32:14.411 [2024-12-07 10:44:13.557605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.411 [2024-12-07 10:44:13.557666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.411 [2024-12-07 10:44:13.557699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:14.411 [2024-12-07 10:44:13.557729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:32:14.411 [2024-12-07 10:44:13.557757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.411 [2024-12-07 10:44:13.601338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.411 [2024-12-07 10:44:13.601473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:14.411 [2024-12-07 10:44:13.601561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.574 ms 00:32:14.411 [2024-12-07 10:44:13.601596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.411 [2024-12-07 10:44:13.601654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.411 [2024-12-07 10:44:13.601688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:14.411 [2024-12-07 10:44:13.601718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:14.411 [2024-12-07 10:44:13.601753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.411 [2024-12-07 10:44:13.601905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.411 [2024-12-07 10:44:13.602011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:14.411 [2024-12-07 10:44:13.602045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:32:14.411 [2024-12-07 10:44:13.602075] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:14.411 [2024-12-07 10:44:13.602143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.411 [2024-12-07 10:44:13.602175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:14.411 [2024-12-07 10:44:13.602260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:14.411 [2024-12-07 10:44:13.602295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.411 [2024-12-07 10:44:13.622397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.411 [2024-12-07 10:44:13.622544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:14.411 [2024-12-07 10:44:13.622672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.939 ms 00:32:14.411 [2024-12-07 10:44:13.622715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.411 [2024-12-07 10:44:13.622856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.411 [2024-12-07 10:44:13.622901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:32:14.411 [2024-12-07 10:44:13.622931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:32:14.411 [2024-12-07 10:44:13.623028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.411 [2024-12-07 10:44:13.670847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.411 [2024-12-07 10:44:13.671011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:32:14.411 [2024-12-07 10:44:13.671138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.829 ms 00:32:14.411 [2024-12-07 10:44:13.671179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.411 [2024-12-07 10:44:13.684757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.411 [2024-12-07 10:44:13.684900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:14.411 [2024-12-07 10:44:13.685063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.546 ms 00:32:14.411 [2024-12-07 10:44:13.685101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.672 [2024-12-07 10:44:13.764897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.672 [2024-12-07 10:44:13.765133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:14.672 [2024-12-07 10:44:13.765235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 79.841 ms 00:32:14.672 [2024-12-07 10:44:13.765272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.672 [2024-12-07 10:44:13.765484] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:32:14.672 [2024-12-07 10:44:13.765843] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:32:14.672 [2024-12-07 10:44:13.766094] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:32:14.672 [2024-12-07 10:44:13.766249] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:32:14.672 [2024-12-07 10:44:13.766264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.672 [2024-12-07 10:44:13.766274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:32:14.672 [2024-12-07 
10:44:13.766286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.930 ms 00:32:14.672 [2024-12-07 10:44:13.766296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.672 [2024-12-07 10:44:13.766364] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:32:14.672 [2024-12-07 10:44:13.766379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.672 [2024-12-07 10:44:13.766395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:32:14.672 [2024-12-07 10:44:13.766407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:14.672 [2024-12-07 10:44:13.766417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.672 [2024-12-07 10:44:13.788238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.672 [2024-12-07 10:44:13.788293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:32:14.672 [2024-12-07 10:44:13.788307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.810 ms 00:32:14.672 [2024-12-07 10:44:13.788317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.672 [2024-12-07 10:44:13.801210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.672 [2024-12-07 10:44:13.801242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:32:14.672 [2024-12-07 10:44:13.801254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:32:14.672 [2024-12-07 10:44:13.801264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:14.672 [2024-12-07 10:44:13.801372] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:32:14.673 [2024-12-07 10:44:13.801562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:14.673 [2024-12-07 10:44:13.801572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:14.673 [2024-12-07 10:44:13.801582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:32:14.673 [2024-12-07 10:44:13.801591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.243 [2024-12-07 10:44:14.397707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.243 [2024-12-07 10:44:14.397839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:15.243 [2024-12-07 10:44:14.397862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 595.967 ms 00:32:15.243 [2024-12-07 10:44:14.397874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.243 [2024-12-07 10:44:14.403929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.243 [2024-12-07 10:44:14.403973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:15.243 [2024-12-07 10:44:14.403998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.215 ms 00:32:15.243 [2024-12-07 10:44:14.404009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.243 [2024-12-07 10:44:14.404675] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:32:15.243 [2024-12-07 10:44:14.404718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.243 [2024-12-07 10:44:14.404730] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:15.243 [2024-12-07 10:44:14.404742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.672 ms 00:32:15.243 [2024-12-07 10:44:14.404752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.243 [2024-12-07 10:44:14.404789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.243 [2024-12-07 10:44:14.404801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:15.243 [2024-12-07 10:44:14.404811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:15.243 [2024-12-07 10:44:14.404827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.243 [2024-12-07 10:44:14.404862] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 604.488 ms, result 0 00:32:15.243 [2024-12-07 10:44:14.404905] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:32:15.243 [2024-12-07 10:44:14.404991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.243 [2024-12-07 10:44:14.405003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:15.243 [2024-12-07 10:44:14.405013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:32:15.243 [2024-12-07 10:44:14.405023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.004340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.004405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:15.814 [2024-12-07 10:44:15.004437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 599.049 ms 00:32:15.814 [2024-12-07 10:44:15.004448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.010292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.010333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:15.814 [2024-12-07 10:44:15.010346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.390 ms 00:32:15.814 [2024-12-07 10:44:15.010356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.010920] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:32:15.814 [2024-12-07 10:44:15.010946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.010957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:15.814 [2024-12-07 10:44:15.010967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.559 ms 00:32:15.814 [2024-12-07 10:44:15.010989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.011022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.011033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:15.814 [2024-12-07 10:44:15.011044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:15.814 [2024-12-07 10:44:15.011053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 
10:44:15.011089] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 607.167 ms, result 0 00:32:15.814 [2024-12-07 10:44:15.011132] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:15.814 [2024-12-07 10:44:15.011145] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:15.814 [2024-12-07 10:44:15.011158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.011168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:32:15.814 [2024-12-07 10:44:15.011178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1211.792 ms 00:32:15.814 [2024-12-07 10:44:15.011188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.011219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.011234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:32:15.814 [2024-12-07 10:44:15.011244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:15.814 [2024-12-07 10:44:15.011254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.022286] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:15.814 [2024-12-07 10:44:15.022417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.022431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:15.814 [2024-12-07 10:44:15.022442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.164 ms 00:32:15.814 [2024-12-07 10:44:15.022452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.023078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.023100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:32:15.814 [2024-12-07 10:44:15.023116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.555 ms 00:32:15.814 [2024-12-07 10:44:15.023126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.025179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.025202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:32:15.814 [2024-12-07 10:44:15.025213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.037 ms 00:32:15.814 [2024-12-07 10:44:15.025223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.025281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.025299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:32:15.814 [2024-12-07 10:44:15.025310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:32:15.814 [2024-12-07 10:44:15.025325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.025431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.025443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:15.814 
[2024-12-07 10:44:15.025454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:32:15.814 [2024-12-07 10:44:15.025464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.025483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.025493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:15.814 [2024-12-07 10:44:15.025503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:15.814 [2024-12-07 10:44:15.025513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.025550] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:15.814 [2024-12-07 10:44:15.025562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.025572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:15.814 [2024-12-07 10:44:15.025582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:15.814 [2024-12-07 10:44:15.025592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.025638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.814 [2024-12-07 10:44:15.025649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:15.814 [2024-12-07 10:44:15.025659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:32:15.814 [2024-12-07 10:44:15.025669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.814 [2024-12-07 10:44:15.026623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1551.622 ms, result 0 00:32:15.814 [2024-12-07 10:44:15.038953] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.814 [2024-12-07 10:44:15.054929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:15.814 [2024-12-07 10:44:15.064144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:15.814 Validate MD5 checksum, iteration 1 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:15.814 10:44:15 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:15.814 10:44:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:16.073 [2024-12-07 10:44:15.207005] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:32:16.074 [2024-12-07 10:44:15.207273] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84469 ] 00:32:16.074 [2024-12-07 10:44:15.388943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.332 [2024-12-07 10:44:15.499092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.234  [2024-12-07T10:44:17.844Z] Copying: 710/1024 [MB] (710 MBps) [2024-12-07T10:44:19.218Z] Copying: 1024/1024 [MB] (average 680 MBps) 00:32:19.865 00:32:19.865 10:44:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:19.865 10:44:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:21.766 10:44:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:21.766 10:44:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=abe4d25d72bab128dbdf48da2db3961d 00:32:21.766 10:44:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ abe4d25d72bab128dbdf48da2db3961d != \a\b\e\4\d\2\5\d\7\2\b\a\b\1\2\8\d\b\d\f\4\8\d\a\2\d\b\3\9\6\1\d ]] 00:32:21.766 10:44:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:21.766 10:44:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:21.766 10:44:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:21.766 Validate MD5 checksum, iteration 2 00:32:21.766 10:44:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:21.766 10:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:21.766 10:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:21.766 10:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:21.766 10:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:21.766 10:44:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:21.766 [2024-12-07 10:44:20.934431] Starting SPDK v25.01-pre git sha1 
a2f5e1c2d / DPDK 24.03.0 initialization... 00:32:21.766 [2024-12-07 10:44:20.934749] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84538 ] 00:32:21.766 [2024-12-07 10:44:21.113027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.024 [2024-12-07 10:44:21.245848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.928  [2024-12-07T10:44:23.847Z] Copying: 565/1024 [MB] (565 MBps) [2024-12-07T10:44:25.228Z] Copying: 1024/1024 [MB] (average 566 MBps) 00:32:25.875 00:32:25.875 10:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:25.875 10:44:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5662478ea9222ebad12bd48217bd0ea6 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5662478ea9222ebad12bd48217bd0ea6 != \5\6\6\2\4\7\8\e\a\9\2\2\2\e\b\a\d\1\2\b\d\4\8\2\1\7\b\d\0\e\a\6 ]] 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84415 ]] 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84415 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84415 ']' 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84415 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84415 00:32:27.802 killing process with pid 84415 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84415' 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84415 00:32:27.802 10:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84415 00:32:29.183 [2024-12-07 10:44:28.123068] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:29.183 [2024-12-07 10:44:28.143523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.183 [2024-12-07 10:44:28.143575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:29.183 [2024-12-07 10:44:28.143595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:29.183 [2024-12-07 10:44:28.143608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.183 [2024-12-07 10:44:28.143635] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:29.183 [2024-12-07 10:44:28.148162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.183 [2024-12-07 10:44:28.148212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:29.183 [2024-12-07 10:44:28.148227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.512 ms 00:32:29.183 [2024-12-07 10:44:28.148239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.183 [2024-12-07 10:44:28.148472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.183 [2024-12-07 10:44:28.148487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:29.183 [2024-12-07 10:44:28.148500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.197 ms 00:32:29.183 [2024-12-07 10:44:28.148512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.183 [2024-12-07 10:44:28.149759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.183 [2024-12-07 10:44:28.149801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:29.183 [2024-12-07 10:44:28.149816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.228 ms 00:32:29.183 [2024-12-07 10:44:28.149834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.183 [2024-12-07 10:44:28.150737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.183 [2024-12-07 10:44:28.150769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:29.183 [2024-12-07 10:44:28.150783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.864 ms 00:32:29.183 [2024-12-07 10:44:28.150795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.183 [2024-12-07 10:44:28.165087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.183 [2024-12-07 10:44:28.165266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:29.183 [2024-12-07 10:44:28.165299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.270 ms 00:32:29.183 [2024-12-07 10:44:28.165312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.183 [2024-12-07 10:44:28.173180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.183 [2024-12-07 10:44:28.173223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:29.183 [2024-12-07 10:44:28.173240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.836 ms 00:32:29.183 [2024-12-07 10:44:28.173251] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:29.183 [2024-12-07 10:44:28.173347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.183 [2024-12-07 10:44:28.173361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:29.183 [2024-12-07 10:44:28.173374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:32:29.183 [2024-12-07 10:44:28.173393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.183 [2024-12-07 10:44:28.187585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.183 [2024-12-07 10:44:28.187624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:29.183 [2024-12-07 10:44:28.187638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.182 ms 00:32:29.183 [2024-12-07 10:44:28.187649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.183 [2024-12-07 10:44:28.201607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.183 [2024-12-07 10:44:28.201645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:29.183 [2024-12-07 10:44:28.201659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.940 ms 00:32:29.183 [2024-12-07 10:44:28.201670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.183 [2024-12-07 10:44:28.215585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.183 [2024-12-07 10:44:28.215742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:29.183 [2024-12-07 10:44:28.215782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.898 ms 00:32:29.183 [2024-12-07 10:44:28.215794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.183 [2024-12-07 10:44:28.229668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.183 [2024-12-07 10:44:28.229707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:29.183 [2024-12-07 10:44:28.229720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.786 ms 00:32:29.183 [2024-12-07 10:44:28.229731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.183 [2024-12-07 10:44:28.229771] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:29.183 [2024-12-07 10:44:28.229789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:29.183 [2024-12-07 10:44:28.229803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:29.183 [2024-12-07 10:44:28.229815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:29.183 [2024-12-07 10:44:28.229828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:29.183 [2024-12-07 10:44:28.229840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:29.183 [2024-12-07 10:44:28.229853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:29.183 [2024-12-07 10:44:28.229865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:29.183 [2024-12-07 10:44:28.229877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:29.183 
[2024-12-07 10:44:28.229889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:29.183 [2024-12-07 10:44:28.229900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:29.183 [2024-12-07 10:44:28.229912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:29.183 [2024-12-07 10:44:28.229924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:29.183 [2024-12-07 10:44:28.229936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:29.183 [2024-12-07 10:44:28.229947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:29.183 [2024-12-07 10:44:28.229958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:29.184 [2024-12-07 10:44:28.229970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:29.184 [2024-12-07 10:44:28.229995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:29.184 [2024-12-07 10:44:28.230007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:29.184 [2024-12-07 10:44:28.230021] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:29.184 [2024-12-07 10:44:28.230032] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 4ca5ef52-9281-4b28-a259-52fb41b59830 00:32:29.184 [2024-12-07 10:44:28.230045] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:29.184 [2024-12-07 10:44:28.230056] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:29.184 [2024-12-07 10:44:28.230067] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:29.184 [2024-12-07 10:44:28.230079] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:29.184 [2024-12-07 10:44:28.230089] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:29.184 [2024-12-07 10:44:28.230101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:29.184 [2024-12-07 10:44:28.230137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:29.184 [2024-12-07 10:44:28.230147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:29.184 [2024-12-07 10:44:28.230157] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:29.184 [2024-12-07 10:44:28.230171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.184 [2024-12-07 10:44:28.230183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:29.184 [2024-12-07 10:44:28.230199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.402 ms 00:32:29.184 [2024-12-07 10:44:28.230211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.184 [2024-12-07 10:44:28.249492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.184 [2024-12-07 10:44:28.249657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:29.184 [2024-12-07 10:44:28.249680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.276 ms 00:32:29.184 [2024-12-07 10:44:28.249693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
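Each management step in the FTL startup and shutdown sequences above is logged as an Action / name / duration / status group, so a saved copy of this console output can be reduced to a per-step timing summary. The snippet below is only a convenience sketch, not part of the SPDK test suite, and the log file name is an assumption: it pairs every "name:" line with the "duration:" line that follows it and prints the slowest steps first.

#!/usr/bin/env bash
# Sketch: summarize FTL management step durations from a saved console log.
# Assumes one log entry per line in the trace_step format shown above:
#   "... name: <step>" followed by "... duration: <ms> ms".
# The file name ftl_console.log is hypothetical.
log=ftl_console.log

awk '
  / name: /     { sub(/.* name: /, ""); step = $0; next }
  / duration: / { sub(/.* duration: /, ""); sub(/ ms.*/, "")
                  printf "%10.3f ms  %s\n", $0, step }
' "$log" | sort -rn | head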
00:32:29.184 [2024-12-07 10:44:28.250303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.184 [2024-12-07 10:44:28.250318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:29.184 [2024-12-07 10:44:28.250332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.574 ms 00:32:29.184 [2024-12-07 10:44:28.250344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.184 [2024-12-07 10:44:28.314538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:29.184 [2024-12-07 10:44:28.314583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:29.184 [2024-12-07 10:44:28.314599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:29.184 [2024-12-07 10:44:28.314618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.184 [2024-12-07 10:44:28.314664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:29.184 [2024-12-07 10:44:28.314677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:29.184 [2024-12-07 10:44:28.314689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:29.184 [2024-12-07 10:44:28.314700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.184 [2024-12-07 10:44:28.314811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:29.184 [2024-12-07 10:44:28.314827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:29.184 [2024-12-07 10:44:28.314840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:29.184 [2024-12-07 10:44:28.314853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.184 [2024-12-07 10:44:28.314883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:29.184 [2024-12-07 10:44:28.314896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:29.184 [2024-12-07 10:44:28.314909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:29.184 [2024-12-07 10:44:28.314920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.184 [2024-12-07 10:44:28.442703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:29.184 [2024-12-07 10:44:28.442764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:29.184 [2024-12-07 10:44:28.442784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:29.184 [2024-12-07 10:44:28.442796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.444 [2024-12-07 10:44:28.544264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:29.444 [2024-12-07 10:44:28.544477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:29.444 [2024-12-07 10:44:28.544505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:29.444 [2024-12-07 10:44:28.544518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.444 [2024-12-07 10:44:28.544688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:29.444 [2024-12-07 10:44:28.544702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:29.444 [2024-12-07 10:44:28.544716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:29.444 [2024-12-07 10:44:28.544728] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.444 [2024-12-07 10:44:28.544793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:29.444 [2024-12-07 10:44:28.544826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:29.444 [2024-12-07 10:44:28.544840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:29.444 [2024-12-07 10:44:28.544852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.444 [2024-12-07 10:44:28.545017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:29.444 [2024-12-07 10:44:28.545033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:29.444 [2024-12-07 10:44:28.545046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:29.444 [2024-12-07 10:44:28.545059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.444 [2024-12-07 10:44:28.545114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:29.444 [2024-12-07 10:44:28.545129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:29.444 [2024-12-07 10:44:28.545148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:29.444 [2024-12-07 10:44:28.545159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.444 [2024-12-07 10:44:28.545211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:29.444 [2024-12-07 10:44:28.545224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:29.444 [2024-12-07 10:44:28.545237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:29.444 [2024-12-07 10:44:28.545249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.444 [2024-12-07 10:44:28.545310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:29.444 [2024-12-07 10:44:28.545330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:29.444 [2024-12-07 10:44:28.545343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:29.444 [2024-12-07 10:44:28.545355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.444 [2024-12-07 10:44:28.545517] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 402.595 ms, result 0 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:30.827 Remove shared memory files 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:30.827 10:44:29 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84187 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:30.827 ************************************ 00:32:30.827 END TEST ftl_upgrade_shutdown 00:32:30.827 ************************************ 00:32:30.827 00:32:30.827 real 1m27.584s 00:32:30.827 user 1m57.648s 00:32:30.827 sys 0m24.680s 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:30.827 10:44:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:30.827 10:44:29 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:32:30.827 10:44:29 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:32:30.827 10:44:29 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:32:30.827 10:44:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:30.827 10:44:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:30.827 ************************************ 00:32:30.827 START TEST ftl_restore_fast 00:32:30.827 ************************************ 00:32:30.827 10:44:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:32:30.827 * Looking for test storage... 00:32:30.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:30.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.827 --rc genhtml_branch_coverage=1 00:32:30.827 --rc genhtml_function_coverage=1 00:32:30.827 --rc genhtml_legend=1 00:32:30.827 --rc geninfo_all_blocks=1 00:32:30.827 --rc geninfo_unexecuted_blocks=1 00:32:30.827 00:32:30.827 ' 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:30.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.827 --rc genhtml_branch_coverage=1 00:32:30.827 --rc genhtml_function_coverage=1 00:32:30.827 --rc genhtml_legend=1 00:32:30.827 --rc geninfo_all_blocks=1 00:32:30.827 --rc geninfo_unexecuted_blocks=1 00:32:30.827 00:32:30.827 ' 00:32:30.827 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:30.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.828 --rc genhtml_branch_coverage=1 00:32:30.828 --rc genhtml_function_coverage=1 00:32:30.828 --rc genhtml_legend=1 00:32:30.828 --rc geninfo_all_blocks=1 00:32:30.828 --rc geninfo_unexecuted_blocks=1 00:32:30.828 00:32:30.828 ' 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:30.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.828 --rc genhtml_branch_coverage=1 00:32:30.828 --rc genhtml_function_coverage=1 00:32:30.828 --rc genhtml_legend=1 00:32:30.828 --rc geninfo_all_blocks=1 00:32:30.828 --rc geninfo_unexecuted_blocks=1 00:32:30.828 00:32:30.828 ' 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
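For readers skimming the ftl_restore_fast setup that the xtrace lines below walk through: the restore.sh argument handling traced there boils down to the pattern sketched here. This is a simplified reconstruction from the traced values, not the literal script source; the mapping of -u to a UUID variable is an assumption, since that option is not exercised in this run.

#!/usr/bin/env bash
# Simplified sketch of the invocation traced below:
#   restore.sh -f -c 0000:00:10.0 0000:00:11.0
fast_shutdown=0
nv_cache=""
uuid=""
while getopts ":u:c:f" opt; do          # same optstring as in the trace
    case "$opt" in
        f) fast_shutdown=1 ;;           # run the fast FTL shutdown variant
        c) nv_cache=$OPTARG ;;          # PCIe address of the NV-cache device (0000:00:10.0 here)
        u) uuid=$OPTARG ;;              # assumed: UUID of an existing FTL instance to reuse
    esac
done
shift $((OPTIND - 1))
device=$1                               # base data device, 0000:00:11.0 in this run
timeout=240                             # seconds allowed for the restore steps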
00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:30.828 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.LCwjQmUWge 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:32:31.089 10:44:30 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=84708 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 84708 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 84708 ']' 00:32:31.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.089 10:44:30 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:31.089 [2024-12-07 10:44:30.303808] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:32:31.089 [2024-12-07 10:44:30.303939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84708 ] 00:32:31.349 [2024-12-07 10:44:30.489526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.349 [2024-12-07 10:44:30.594990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.288 10:44:31 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.288 10:44:31 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:32:32.288 10:44:31 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:32:32.288 10:44:31 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:32:32.288 10:44:31 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:32.288 10:44:31 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:32:32.288 10:44:31 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:32:32.288 10:44:31 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:32.548 10:44:31 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:32:32.548 10:44:31 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:32:32.548 10:44:31 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:32:32.548 10:44:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:32:32.548 10:44:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:32.548 10:44:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:32.548 10:44:31 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:32:32.548 10:44:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:32:32.807 10:44:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:32.807 { 00:32:32.807 "name": "nvme0n1", 00:32:32.807 "aliases": [ 00:32:32.807 "ede1fffa-59b4-4da9-b57c-62690b8789d1" 00:32:32.807 ], 00:32:32.807 "product_name": "NVMe disk", 00:32:32.807 "block_size": 4096, 00:32:32.807 "num_blocks": 1310720, 00:32:32.807 "uuid": "ede1fffa-59b4-4da9-b57c-62690b8789d1", 00:32:32.807 "numa_id": -1, 00:32:32.807 "assigned_rate_limits": { 00:32:32.807 "rw_ios_per_sec": 0, 00:32:32.807 "rw_mbytes_per_sec": 0, 00:32:32.807 "r_mbytes_per_sec": 0, 00:32:32.807 "w_mbytes_per_sec": 0 00:32:32.807 }, 00:32:32.807 "claimed": true, 00:32:32.807 "claim_type": "read_many_write_one", 00:32:32.807 "zoned": false, 00:32:32.807 "supported_io_types": { 00:32:32.807 "read": true, 00:32:32.807 "write": true, 00:32:32.807 "unmap": true, 00:32:32.807 "flush": true, 00:32:32.807 "reset": true, 00:32:32.807 "nvme_admin": true, 00:32:32.807 "nvme_io": true, 00:32:32.807 "nvme_io_md": false, 00:32:32.807 "write_zeroes": true, 00:32:32.807 "zcopy": false, 00:32:32.807 "get_zone_info": false, 00:32:32.807 "zone_management": false, 00:32:32.807 "zone_append": false, 00:32:32.807 "compare": true, 00:32:32.807 "compare_and_write": false, 00:32:32.807 "abort": true, 00:32:32.807 "seek_hole": false, 00:32:32.807 "seek_data": false, 00:32:32.807 "copy": true, 00:32:32.807 "nvme_iov_md": false 00:32:32.807 }, 00:32:32.807 "driver_specific": { 00:32:32.807 "nvme": [ 00:32:32.807 { 00:32:32.807 "pci_address": "0000:00:11.0", 00:32:32.807 "trid": { 00:32:32.807 "trtype": "PCIe", 00:32:32.807 "traddr": "0000:00:11.0" 00:32:32.807 }, 00:32:32.807 "ctrlr_data": { 00:32:32.807 "cntlid": 0, 00:32:32.807 "vendor_id": "0x1b36", 00:32:32.807 "model_number": "QEMU NVMe Ctrl", 00:32:32.807 "serial_number": "12341", 00:32:32.807 "firmware_revision": "8.0.0", 00:32:32.807 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:32.807 "oacs": { 00:32:32.807 "security": 0, 00:32:32.807 "format": 1, 00:32:32.807 "firmware": 0, 00:32:32.807 "ns_manage": 1 00:32:32.807 }, 00:32:32.807 "multi_ctrlr": false, 00:32:32.807 "ana_reporting": false 00:32:32.807 }, 00:32:32.807 "vs": { 00:32:32.807 "nvme_version": "1.4" 00:32:32.807 }, 00:32:32.807 "ns_data": { 00:32:32.807 "id": 1, 00:32:32.807 "can_share": false 00:32:32.807 } 00:32:32.807 } 00:32:32.807 ], 00:32:32.807 "mp_policy": "active_passive" 00:32:32.807 } 00:32:32.807 } 00:32:32.807 ]' 00:32:32.807 10:44:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:32.807 10:44:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:32.807 10:44:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:32.807 10:44:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:32.807 10:44:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:32.807 10:44:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:32:32.807 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:32:32.807 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:32:32.807 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:32:32.807 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:32.808 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:33.066 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=95e8261b-9ee3-4a8f-a9a9-d453a0e9d640 00:32:33.066 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:32:33.066 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 95e8261b-9ee3-4a8f-a9a9-d453a0e9d640 00:32:33.325 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:32:33.325 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=a7cd6b1d-cfaf-4779-a56d-25e8c311b7a2 00:32:33.325 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a7cd6b1d-cfaf-4779-a56d-25e8c311b7a2 00:32:33.584 10:44:32 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=3863a3c3-92ae-4499-b453-f037dcc99b93 00:32:33.585 10:44:32 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:32:33.585 10:44:32 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3863a3c3-92ae-4499-b453-f037dcc99b93 00:32:33.585 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:32:33.585 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:33.585 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=3863a3c3-92ae-4499-b453-f037dcc99b93 00:32:33.585 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:32:33.585 10:44:32 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 3863a3c3-92ae-4499-b453-f037dcc99b93 00:32:33.585 10:44:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=3863a3c3-92ae-4499-b453-f037dcc99b93 00:32:33.585 10:44:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:33.585 10:44:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:33.585 10:44:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:33.585 10:44:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3863a3c3-92ae-4499-b453-f037dcc99b93 00:32:33.844 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:33.844 { 00:32:33.844 "name": "3863a3c3-92ae-4499-b453-f037dcc99b93", 00:32:33.844 "aliases": [ 00:32:33.844 "lvs/nvme0n1p0" 00:32:33.844 ], 00:32:33.844 "product_name": "Logical Volume", 00:32:33.844 "block_size": 4096, 00:32:33.844 "num_blocks": 26476544, 00:32:33.844 "uuid": "3863a3c3-92ae-4499-b453-f037dcc99b93", 00:32:33.844 "assigned_rate_limits": { 00:32:33.844 "rw_ios_per_sec": 0, 00:32:33.844 "rw_mbytes_per_sec": 0, 00:32:33.844 "r_mbytes_per_sec": 0, 00:32:33.844 "w_mbytes_per_sec": 0 00:32:33.844 }, 00:32:33.844 "claimed": false, 00:32:33.844 "zoned": false, 00:32:33.844 "supported_io_types": { 00:32:33.844 "read": true, 00:32:33.844 "write": true, 00:32:33.844 "unmap": true, 00:32:33.844 "flush": false, 00:32:33.844 "reset": true, 00:32:33.844 "nvme_admin": false, 00:32:33.844 "nvme_io": false, 00:32:33.844 "nvme_io_md": false, 00:32:33.844 "write_zeroes": true, 00:32:33.844 "zcopy": false, 00:32:33.844 "get_zone_info": false, 00:32:33.844 "zone_management": false, 00:32:33.844 
"zone_append": false, 00:32:33.845 "compare": false, 00:32:33.845 "compare_and_write": false, 00:32:33.845 "abort": false, 00:32:33.845 "seek_hole": true, 00:32:33.845 "seek_data": true, 00:32:33.845 "copy": false, 00:32:33.845 "nvme_iov_md": false 00:32:33.845 }, 00:32:33.845 "driver_specific": { 00:32:33.845 "lvol": { 00:32:33.845 "lvol_store_uuid": "a7cd6b1d-cfaf-4779-a56d-25e8c311b7a2", 00:32:33.845 "base_bdev": "nvme0n1", 00:32:33.845 "thin_provision": true, 00:32:33.845 "num_allocated_clusters": 0, 00:32:33.845 "snapshot": false, 00:32:33.845 "clone": false, 00:32:33.845 "esnap_clone": false 00:32:33.845 } 00:32:33.845 } 00:32:33.845 } 00:32:33.845 ]' 00:32:33.845 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:33.845 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:33.845 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:33.845 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:33.845 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:33.845 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:33.845 10:44:33 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:32:33.845 10:44:33 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:32:34.104 10:44:33 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:32:34.364 10:44:33 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:32:34.364 10:44:33 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:32:34.364 10:44:33 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 3863a3c3-92ae-4499-b453-f037dcc99b93 00:32:34.364 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=3863a3c3-92ae-4499-b453-f037dcc99b93 00:32:34.364 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:34.364 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:34.364 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:34.364 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3863a3c3-92ae-4499-b453-f037dcc99b93 00:32:34.364 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:34.364 { 00:32:34.364 "name": "3863a3c3-92ae-4499-b453-f037dcc99b93", 00:32:34.364 "aliases": [ 00:32:34.364 "lvs/nvme0n1p0" 00:32:34.364 ], 00:32:34.364 "product_name": "Logical Volume", 00:32:34.364 "block_size": 4096, 00:32:34.364 "num_blocks": 26476544, 00:32:34.364 "uuid": "3863a3c3-92ae-4499-b453-f037dcc99b93", 00:32:34.364 "assigned_rate_limits": { 00:32:34.364 "rw_ios_per_sec": 0, 00:32:34.364 "rw_mbytes_per_sec": 0, 00:32:34.364 "r_mbytes_per_sec": 0, 00:32:34.364 "w_mbytes_per_sec": 0 00:32:34.364 }, 00:32:34.364 "claimed": false, 00:32:34.364 "zoned": false, 00:32:34.364 "supported_io_types": { 00:32:34.364 "read": true, 00:32:34.364 "write": true, 00:32:34.364 "unmap": true, 00:32:34.364 "flush": false, 00:32:34.364 "reset": true, 00:32:34.364 "nvme_admin": false, 00:32:34.364 "nvme_io": false, 00:32:34.364 "nvme_io_md": false, 00:32:34.364 "write_zeroes": true, 00:32:34.364 "zcopy": false, 00:32:34.364 "get_zone_info": false, 00:32:34.364 
"zone_management": false, 00:32:34.364 "zone_append": false, 00:32:34.364 "compare": false, 00:32:34.364 "compare_and_write": false, 00:32:34.364 "abort": false, 00:32:34.364 "seek_hole": true, 00:32:34.364 "seek_data": true, 00:32:34.364 "copy": false, 00:32:34.364 "nvme_iov_md": false 00:32:34.364 }, 00:32:34.364 "driver_specific": { 00:32:34.364 "lvol": { 00:32:34.364 "lvol_store_uuid": "a7cd6b1d-cfaf-4779-a56d-25e8c311b7a2", 00:32:34.364 "base_bdev": "nvme0n1", 00:32:34.364 "thin_provision": true, 00:32:34.364 "num_allocated_clusters": 0, 00:32:34.364 "snapshot": false, 00:32:34.364 "clone": false, 00:32:34.364 "esnap_clone": false 00:32:34.364 } 00:32:34.364 } 00:32:34.364 } 00:32:34.364 ]' 00:32:34.364 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:34.364 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:34.364 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:34.624 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:34.624 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:34.624 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:34.624 10:44:33 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:32:34.624 10:44:33 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:32:34.624 10:44:33 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:32:34.624 10:44:33 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 3863a3c3-92ae-4499-b453-f037dcc99b93 00:32:34.624 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=3863a3c3-92ae-4499-b453-f037dcc99b93 00:32:34.624 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:34.624 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:34.624 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:34.624 10:44:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3863a3c3-92ae-4499-b453-f037dcc99b93 00:32:34.883 10:44:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:34.883 { 00:32:34.883 "name": "3863a3c3-92ae-4499-b453-f037dcc99b93", 00:32:34.883 "aliases": [ 00:32:34.883 "lvs/nvme0n1p0" 00:32:34.883 ], 00:32:34.883 "product_name": "Logical Volume", 00:32:34.883 "block_size": 4096, 00:32:34.883 "num_blocks": 26476544, 00:32:34.883 "uuid": "3863a3c3-92ae-4499-b453-f037dcc99b93", 00:32:34.883 "assigned_rate_limits": { 00:32:34.883 "rw_ios_per_sec": 0, 00:32:34.883 "rw_mbytes_per_sec": 0, 00:32:34.883 "r_mbytes_per_sec": 0, 00:32:34.883 "w_mbytes_per_sec": 0 00:32:34.883 }, 00:32:34.883 "claimed": false, 00:32:34.883 "zoned": false, 00:32:34.883 "supported_io_types": { 00:32:34.883 "read": true, 00:32:34.883 "write": true, 00:32:34.883 "unmap": true, 00:32:34.883 "flush": false, 00:32:34.883 "reset": true, 00:32:34.883 "nvme_admin": false, 00:32:34.883 "nvme_io": false, 00:32:34.883 "nvme_io_md": false, 00:32:34.883 "write_zeroes": true, 00:32:34.883 "zcopy": false, 00:32:34.883 "get_zone_info": false, 00:32:34.883 "zone_management": false, 00:32:34.883 "zone_append": false, 00:32:34.883 "compare": false, 00:32:34.883 "compare_and_write": false, 00:32:34.883 "abort": false, 
00:32:34.883 "seek_hole": true, 00:32:34.883 "seek_data": true, 00:32:34.883 "copy": false, 00:32:34.883 "nvme_iov_md": false 00:32:34.883 }, 00:32:34.883 "driver_specific": { 00:32:34.883 "lvol": { 00:32:34.883 "lvol_store_uuid": "a7cd6b1d-cfaf-4779-a56d-25e8c311b7a2", 00:32:34.883 "base_bdev": "nvme0n1", 00:32:34.883 "thin_provision": true, 00:32:34.883 "num_allocated_clusters": 0, 00:32:34.883 "snapshot": false, 00:32:34.883 "clone": false, 00:32:34.883 "esnap_clone": false 00:32:34.883 } 00:32:34.883 } 00:32:34.883 } 00:32:34.883 ]' 00:32:34.883 10:44:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:34.883 10:44:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:34.883 10:44:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:35.144 10:44:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:35.144 10:44:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:35.144 10:44:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:35.144 10:44:34 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:32:35.145 10:44:34 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3863a3c3-92ae-4499-b453-f037dcc99b93 --l2p_dram_limit 10' 00:32:35.145 10:44:34 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:32:35.145 10:44:34 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:32:35.145 10:44:34 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:32:35.145 10:44:34 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:32:35.145 10:44:34 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:32:35.145 10:44:34 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3863a3c3-92ae-4499-b453-f037dcc99b93 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:32:35.145 [2024-12-07 10:44:34.422285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.145 [2024-12-07 10:44:34.422436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:35.145 [2024-12-07 10:44:34.422521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:35.145 [2024-12-07 10:44:34.422556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.145 [2024-12-07 10:44:34.422657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.145 [2024-12-07 10:44:34.422711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:35.145 [2024-12-07 10:44:34.422744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:32:35.145 [2024-12-07 10:44:34.422774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.145 [2024-12-07 10:44:34.422892] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:35.145 [2024-12-07 10:44:34.424039] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:35.145 [2024-12-07 10:44:34.424216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.145 [2024-12-07 10:44:34.424280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:35.145 [2024-12-07 10:44:34.424321] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:32:35.145 [2024-12-07 10:44:34.424474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.145 [2024-12-07 10:44:34.424648] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1550b398-958a-49a9-bb53-5ab7cdf56510 00:32:35.145 [2024-12-07 10:44:34.426201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.145 [2024-12-07 10:44:34.426339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:32:35.145 [2024-12-07 10:44:34.426423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:35.145 [2024-12-07 10:44:34.426463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.145 [2024-12-07 10:44:34.434104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.145 [2024-12-07 10:44:34.434255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:35.145 [2024-12-07 10:44:34.434361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.557 ms 00:32:35.145 [2024-12-07 10:44:34.434401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.145 [2024-12-07 10:44:34.434518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.145 [2024-12-07 10:44:34.434557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:35.145 [2024-12-07 10:44:34.434588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:32:35.145 [2024-12-07 10:44:34.434694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.145 [2024-12-07 10:44:34.434768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.145 [2024-12-07 10:44:34.434805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:35.145 [2024-12-07 10:44:34.434839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:35.145 [2024-12-07 10:44:34.434871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.145 [2024-12-07 10:44:34.434913] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:35.145 [2024-12-07 10:44:34.439963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.145 [2024-12-07 10:44:34.440115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:35.145 [2024-12-07 10:44:34.440230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.061 ms 00:32:35.145 [2024-12-07 10:44:34.440274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.145 [2024-12-07 10:44:34.440333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.145 [2024-12-07 10:44:34.440364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:35.145 [2024-12-07 10:44:34.440395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:35.145 [2024-12-07 10:44:34.440423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.145 [2024-12-07 10:44:34.440491] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:32:35.145 [2024-12-07 10:44:34.440814] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:35.145 [2024-12-07 10:44:34.440916] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:35.145 [2024-12-07 10:44:34.440932] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:35.145 [2024-12-07 10:44:34.440948] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:35.145 [2024-12-07 10:44:34.440960] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:35.145 [2024-12-07 10:44:34.440974] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:35.145 [2024-12-07 10:44:34.441003] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:35.145 [2024-12-07 10:44:34.441017] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:35.145 [2024-12-07 10:44:34.441027] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:35.145 [2024-12-07 10:44:34.441041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.145 [2024-12-07 10:44:34.441060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:35.145 [2024-12-07 10:44:34.441073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:32:35.145 [2024-12-07 10:44:34.441083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.145 [2024-12-07 10:44:34.441164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.145 [2024-12-07 10:44:34.441175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:35.145 [2024-12-07 10:44:34.441187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:35.145 [2024-12-07 10:44:34.441199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.145 [2024-12-07 10:44:34.441286] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:35.145 [2024-12-07 10:44:34.441299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:35.145 [2024-12-07 10:44:34.441312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:35.145 [2024-12-07 10:44:34.441322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.145 [2024-12-07 10:44:34.441335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:35.145 [2024-12-07 10:44:34.441343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:35.145 [2024-12-07 10:44:34.441355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:35.145 [2024-12-07 10:44:34.441368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:35.145 [2024-12-07 10:44:34.441379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:35.145 [2024-12-07 10:44:34.441389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:35.145 [2024-12-07 10:44:34.441402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:35.145 [2024-12-07 10:44:34.441411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:35.145 [2024-12-07 10:44:34.441423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:35.145 [2024-12-07 10:44:34.441432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:35.145 [2024-12-07 10:44:34.441443] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:35.145 [2024-12-07 10:44:34.441451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.145 [2024-12-07 10:44:34.441464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:35.145 [2024-12-07 10:44:34.441473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:35.145 [2024-12-07 10:44:34.441484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.145 [2024-12-07 10:44:34.441492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:35.145 [2024-12-07 10:44:34.441504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:35.145 [2024-12-07 10:44:34.441513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:35.145 [2024-12-07 10:44:34.441524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:35.145 [2024-12-07 10:44:34.441532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:35.145 [2024-12-07 10:44:34.441543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:35.145 [2024-12-07 10:44:34.441551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:35.145 [2024-12-07 10:44:34.441562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:35.145 [2024-12-07 10:44:34.441571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:35.145 [2024-12-07 10:44:34.441583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:35.145 [2024-12-07 10:44:34.441591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:35.145 [2024-12-07 10:44:34.441601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:35.145 [2024-12-07 10:44:34.441610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:35.145 [2024-12-07 10:44:34.441623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:35.145 [2024-12-07 10:44:34.441632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:35.145 [2024-12-07 10:44:34.441642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:35.145 [2024-12-07 10:44:34.441651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:35.145 [2024-12-07 10:44:34.441663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:35.145 [2024-12-07 10:44:34.441672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:35.146 [2024-12-07 10:44:34.441683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:35.146 [2024-12-07 10:44:34.441694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.146 [2024-12-07 10:44:34.441705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:35.146 [2024-12-07 10:44:34.441715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:35.146 [2024-12-07 10:44:34.441725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.146 [2024-12-07 10:44:34.441734] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:35.146 [2024-12-07 10:44:34.441746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:35.146 [2024-12-07 10:44:34.441756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:32:35.146 [2024-12-07 10:44:34.441768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.146 [2024-12-07 10:44:34.441777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:35.146 [2024-12-07 10:44:34.441791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:35.146 [2024-12-07 10:44:34.441800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:35.146 [2024-12-07 10:44:34.441811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:35.146 [2024-12-07 10:44:34.441820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:35.146 [2024-12-07 10:44:34.441831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:35.146 [2024-12-07 10:44:34.441843] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:35.146 [2024-12-07 10:44:34.441860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:35.146 [2024-12-07 10:44:34.441871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:35.146 [2024-12-07 10:44:34.441884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:35.146 [2024-12-07 10:44:34.441894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:35.146 [2024-12-07 10:44:34.441906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:35.146 [2024-12-07 10:44:34.441916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:35.146 [2024-12-07 10:44:34.441928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:35.146 [2024-12-07 10:44:34.441938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:35.146 [2024-12-07 10:44:34.441951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:35.146 [2024-12-07 10:44:34.441961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:35.146 [2024-12-07 10:44:34.442172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:35.146 [2024-12-07 10:44:34.442252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:35.146 [2024-12-07 10:44:34.442384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:35.146 [2024-12-07 10:44:34.442434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:35.146 [2024-12-07 10:44:34.442520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:32:35.146 [2024-12-07 10:44:34.442568] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:35.146 [2024-12-07 10:44:34.442659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:35.146 [2024-12-07 10:44:34.442729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:35.146 [2024-12-07 10:44:34.442779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:35.146 [2024-12-07 10:44:34.442868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:35.146 [2024-12-07 10:44:34.442921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:35.146 [2024-12-07 10:44:34.442969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.146 [2024-12-07 10:44:34.443016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:35.146 [2024-12-07 10:44:34.443048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:32:35.146 [2024-12-07 10:44:34.443219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.146 [2024-12-07 10:44:34.443292] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:32:35.146 [2024-12-07 10:44:34.443350] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:32:39.339 [2024-12-07 10:44:38.275420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.275663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:32:39.339 [2024-12-07 10:44:38.275689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3838.349 ms 00:32:39.339 [2024-12-07 10:44:38.275702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.313432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.313639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:39.339 [2024-12-07 10:44:38.313664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.458 ms 00:32:39.339 [2024-12-07 10:44:38.313678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.313806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.313822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:39.339 [2024-12-07 10:44:38.313837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:32:39.339 [2024-12-07 10:44:38.313852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.356998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.357042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:39.339 [2024-12-07 10:44:38.357056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.160 ms 00:32:39.339 [2024-12-07 10:44:38.357069] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.357107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.357120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:39.339 [2024-12-07 10:44:38.357130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:39.339 [2024-12-07 10:44:38.357152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.357626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.357652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:39.339 [2024-12-07 10:44:38.357663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:32:39.339 [2024-12-07 10:44:38.357675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.357766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.357782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:39.339 [2024-12-07 10:44:38.357791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:32:39.339 [2024-12-07 10:44:38.357805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.378494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.378537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:39.339 [2024-12-07 10:44:38.378555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.703 ms 00:32:39.339 [2024-12-07 10:44:38.378568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.417909] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:39.339 [2024-12-07 10:44:38.422116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.422150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:39.339 [2024-12-07 10:44:38.422169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.520 ms 00:32:39.339 [2024-12-07 10:44:38.422182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.520966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.521027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:32:39.339 [2024-12-07 10:44:38.521046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.903 ms 00:32:39.339 [2024-12-07 10:44:38.521056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.521250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.521265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:39.339 [2024-12-07 10:44:38.521281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:32:39.339 [2024-12-07 10:44:38.521290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.556194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.556230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:32:39.339 [2024-12-07 10:44:38.556256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.906 ms 00:32:39.339 [2024-12-07 10:44:38.556270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.590640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.590777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:32:39.339 [2024-12-07 10:44:38.590820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.371 ms 00:32:39.339 [2024-12-07 10:44:38.590830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.591533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.591552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:39.339 [2024-12-07 10:44:38.591569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:32:39.339 [2024-12-07 10:44:38.591579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.339 [2024-12-07 10:44:38.690313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.339 [2024-12-07 10:44:38.690356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:39.339 [2024-12-07 10:44:38.690378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.835 ms 00:32:39.339 [2024-12-07 10:44:38.690389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.600 [2024-12-07 10:44:38.726878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.600 [2024-12-07 10:44:38.726917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:39.600 [2024-12-07 10:44:38.726933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.462 ms 00:32:39.600 [2024-12-07 10:44:38.726943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.600 [2024-12-07 10:44:38.760497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.600 [2024-12-07 10:44:38.760535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:39.600 [2024-12-07 10:44:38.760550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.553 ms 00:32:39.600 [2024-12-07 10:44:38.760560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.600 [2024-12-07 10:44:38.795179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.600 [2024-12-07 10:44:38.795216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:39.600 [2024-12-07 10:44:38.795234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.632 ms 00:32:39.600 [2024-12-07 10:44:38.795244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.600 [2024-12-07 10:44:38.797411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.600 [2024-12-07 10:44:38.797434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:39.600 [2024-12-07 10:44:38.797450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:39.600 [2024-12-07 10:44:38.797460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.600 [2024-12-07 10:44:38.797625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.600 [2024-12-07 
10:44:38.797645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:39.600 [2024-12-07 10:44:38.797658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:32:39.600 [2024-12-07 10:44:38.797668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.600 [2024-12-07 10:44:38.798707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4383.078 ms, result 0 00:32:39.600 { 00:32:39.600 "name": "ftl0", 00:32:39.600 "uuid": "1550b398-958a-49a9-bb53-5ab7cdf56510" 00:32:39.600 } 00:32:39.600 10:44:38 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:32:39.600 10:44:38 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:39.859 10:44:39 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:32:39.859 10:44:39 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:40.119 [2024-12-07 10:44:39.237268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.120 [2024-12-07 10:44:39.237436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:40.120 [2024-12-07 10:44:39.237457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:40.120 [2024-12-07 10:44:39.237470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.120 [2024-12-07 10:44:39.237501] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:40.120 [2024-12-07 10:44:39.241618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.120 [2024-12-07 10:44:39.241649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:40.120 [2024-12-07 10:44:39.241663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.102 ms 00:32:40.120 [2024-12-07 10:44:39.241673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.120 [2024-12-07 10:44:39.241902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.120 [2024-12-07 10:44:39.241915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:40.120 [2024-12-07 10:44:39.241927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:32:40.120 [2024-12-07 10:44:39.241937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.120 [2024-12-07 10:44:39.244364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.120 [2024-12-07 10:44:39.244501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:40.120 [2024-12-07 10:44:39.244524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.413 ms 00:32:40.120 [2024-12-07 10:44:39.244535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.120 [2024-12-07 10:44:39.249276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.120 [2024-12-07 10:44:39.249312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:40.120 [2024-12-07 10:44:39.249325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.719 ms 00:32:40.120 [2024-12-07 10:44:39.249350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.120 [2024-12-07 10:44:39.284098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:32:40.120 [2024-12-07 10:44:39.284136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:40.120 [2024-12-07 10:44:39.284152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.734 ms 00:32:40.120 [2024-12-07 10:44:39.284161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.120 [2024-12-07 10:44:39.305343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.120 [2024-12-07 10:44:39.305379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:40.120 [2024-12-07 10:44:39.305395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.169 ms 00:32:40.120 [2024-12-07 10:44:39.305405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.120 [2024-12-07 10:44:39.305547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.120 [2024-12-07 10:44:39.305560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:40.120 [2024-12-07 10:44:39.305572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:32:40.120 [2024-12-07 10:44:39.305584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.120 [2024-12-07 10:44:39.340259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.120 [2024-12-07 10:44:39.340294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:40.120 [2024-12-07 10:44:39.340309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.709 ms 00:32:40.120 [2024-12-07 10:44:39.340318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.120 [2024-12-07 10:44:39.374397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.120 [2024-12-07 10:44:39.374441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:40.120 [2024-12-07 10:44:39.374457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.090 ms 00:32:40.120 [2024-12-07 10:44:39.374483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.120 [2024-12-07 10:44:39.408156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.120 [2024-12-07 10:44:39.408286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:40.120 [2024-12-07 10:44:39.408326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.681 ms 00:32:40.120 [2024-12-07 10:44:39.408336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.120 [2024-12-07 10:44:39.441844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.120 [2024-12-07 10:44:39.441879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:40.120 [2024-12-07 10:44:39.441894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.445 ms 00:32:40.120 [2024-12-07 10:44:39.441903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.120 [2024-12-07 10:44:39.441944] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:40.120 [2024-12-07 10:44:39.441962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.441989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442000] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442307] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:40.120 [2024-12-07 10:44:39.442549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 
[2024-12-07 10:44:39.442621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:32:40.121 [2024-12-07 10:44:39.442956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.442991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:40.121 [2024-12-07 10:44:39.443250] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:40.121 [2024-12-07 10:44:39.443262] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1550b398-958a-49a9-bb53-5ab7cdf56510 
00:32:40.121 [2024-12-07 10:44:39.443273] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:40.121 [2024-12-07 10:44:39.443291] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:40.121 [2024-12-07 10:44:39.443301] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:40.121 [2024-12-07 10:44:39.443313] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:40.121 [2024-12-07 10:44:39.443323] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:40.121 [2024-12-07 10:44:39.443336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:40.121 [2024-12-07 10:44:39.443345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:40.121 [2024-12-07 10:44:39.443357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:40.121 [2024-12-07 10:44:39.443366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:40.121 [2024-12-07 10:44:39.443378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.121 [2024-12-07 10:44:39.443388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:40.121 [2024-12-07 10:44:39.443401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:32:40.121 [2024-12-07 10:44:39.443413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.121 [2024-12-07 10:44:39.462063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.121 [2024-12-07 10:44:39.462205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:40.121 [2024-12-07 10:44:39.462229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.624 ms 00:32:40.121 [2024-12-07 10:44:39.462239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.121 [2024-12-07 10:44:39.462781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.121 [2024-12-07 10:44:39.462799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:40.121 [2024-12-07 10:44:39.462812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:32:40.121 [2024-12-07 10:44:39.462821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.381 [2024-12-07 10:44:39.526136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.381 [2024-12-07 10:44:39.526175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:40.381 [2024-12-07 10:44:39.526189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.381 [2024-12-07 10:44:39.526216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.381 [2024-12-07 10:44:39.526273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.381 [2024-12-07 10:44:39.526287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:40.381 [2024-12-07 10:44:39.526300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.381 [2024-12-07 10:44:39.526310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.381 [2024-12-07 10:44:39.526400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.381 [2024-12-07 10:44:39.526413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:40.381 [2024-12-07 10:44:39.526426] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.381 [2024-12-07 10:44:39.526436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.381 [2024-12-07 10:44:39.526458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.381 [2024-12-07 10:44:39.526468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:40.381 [2024-12-07 10:44:39.526483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.381 [2024-12-07 10:44:39.526492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.381 [2024-12-07 10:44:39.645545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.381 [2024-12-07 10:44:39.645723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:40.381 [2024-12-07 10:44:39.645840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.381 [2024-12-07 10:44:39.645878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.640 [2024-12-07 10:44:39.743025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.640 [2024-12-07 10:44:39.743176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:40.640 [2024-12-07 10:44:39.743316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.640 [2024-12-07 10:44:39.743354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.640 [2024-12-07 10:44:39.743491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.640 [2024-12-07 10:44:39.743585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:40.640 [2024-12-07 10:44:39.743627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.640 [2024-12-07 10:44:39.743657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.640 [2024-12-07 10:44:39.743806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.640 [2024-12-07 10:44:39.743896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:40.640 [2024-12-07 10:44:39.743997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.640 [2024-12-07 10:44:39.744016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.640 [2024-12-07 10:44:39.744140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.640 [2024-12-07 10:44:39.744153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:40.640 [2024-12-07 10:44:39.744166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.640 [2024-12-07 10:44:39.744177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.640 [2024-12-07 10:44:39.744219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.640 [2024-12-07 10:44:39.744231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:40.640 [2024-12-07 10:44:39.744244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.640 [2024-12-07 10:44:39.744254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.640 [2024-12-07 10:44:39.744299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.640 [2024-12-07 10:44:39.744310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:32:40.640 [2024-12-07 10:44:39.744323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.640 [2024-12-07 10:44:39.744333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.640 [2024-12-07 10:44:39.744380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.640 [2024-12-07 10:44:39.744392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:40.640 [2024-12-07 10:44:39.744404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.640 [2024-12-07 10:44:39.744414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.640 [2024-12-07 10:44:39.744547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 508.065 ms, result 0 00:32:40.640 true 00:32:40.641 10:44:39 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 84708 00:32:40.641 10:44:39 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84708 ']' 00:32:40.641 10:44:39 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84708 00:32:40.641 10:44:39 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:32:40.641 10:44:39 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:40.641 10:44:39 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84708 00:32:40.641 killing process with pid 84708 00:32:40.641 10:44:39 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:40.641 10:44:39 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:40.641 10:44:39 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84708' 00:32:40.641 10:44:39 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 84708 00:32:40.641 10:44:39 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 84708 00:32:45.923 10:44:44 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:32:49.214 262144+0 records in 00:32:49.214 262144+0 records out 00:32:49.214 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.83311 s, 280 MB/s 00:32:49.214 10:44:48 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:51.123 10:44:50 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:51.123 [2024-12-07 10:44:50.249002] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
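Sketch of the write phase traced just above, with the size arithmetic spelled out: the commands and paths are copied from the restore.sh trace lines, only the annotations are added, and the ftl.json config describing ftl0 is assumed to have been generated earlier in the run.

  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
      # 262144 records x 4096 B = 1073741824 B (1 GiB); 1 GiB / 3.83311 s ~ 280 MB/s, matching the dd summary above
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
      # record the checksum of the random data before it is written into FTL
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
      # write the 1 GiB file into the ftl0 bdev; the 1024 MB copy progress further below is this transfer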
00:32:51.123 [2024-12-07 10:44:50.249306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84939 ] 00:32:51.123 [2024-12-07 10:44:50.429473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.384 [2024-12-07 10:44:50.561791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.644 [2024-12-07 10:44:50.959182] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:51.644 [2024-12-07 10:44:50.959256] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:51.904 [2024-12-07 10:44:51.121440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.904 [2024-12-07 10:44:51.121492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:51.904 [2024-12-07 10:44:51.121507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:51.904 [2024-12-07 10:44:51.121533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.904 [2024-12-07 10:44:51.121580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.904 [2024-12-07 10:44:51.121595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:51.904 [2024-12-07 10:44:51.121606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:51.904 [2024-12-07 10:44:51.121616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.904 [2024-12-07 10:44:51.121637] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:51.904 [2024-12-07 10:44:51.122572] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:51.904 [2024-12-07 10:44:51.122602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.904 [2024-12-07 10:44:51.122613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:51.904 [2024-12-07 10:44:51.122624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:32:51.904 [2024-12-07 10:44:51.122643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.904 [2024-12-07 10:44:51.124070] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:51.905 [2024-12-07 10:44:51.143058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.905 [2024-12-07 10:44:51.143099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:51.905 [2024-12-07 10:44:51.143113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.020 ms 00:32:51.905 [2024-12-07 10:44:51.143123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.905 [2024-12-07 10:44:51.143191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.905 [2024-12-07 10:44:51.143204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:51.905 [2024-12-07 10:44:51.143214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:51.905 [2024-12-07 10:44:51.143224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.905 [2024-12-07 10:44:51.150155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:51.905 [2024-12-07 10:44:51.150183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:51.905 [2024-12-07 10:44:51.150194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.871 ms 00:32:51.905 [2024-12-07 10:44:51.150224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.905 [2024-12-07 10:44:51.150301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.905 [2024-12-07 10:44:51.150314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:51.905 [2024-12-07 10:44:51.150324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:51.905 [2024-12-07 10:44:51.150334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.905 [2024-12-07 10:44:51.150373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.905 [2024-12-07 10:44:51.150385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:51.905 [2024-12-07 10:44:51.150395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:51.905 [2024-12-07 10:44:51.150404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.905 [2024-12-07 10:44:51.150435] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:51.905 [2024-12-07 10:44:51.155251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.905 [2024-12-07 10:44:51.155286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:51.905 [2024-12-07 10:44:51.155302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.830 ms 00:32:51.905 [2024-12-07 10:44:51.155312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.905 [2024-12-07 10:44:51.155345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.905 [2024-12-07 10:44:51.155356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:51.905 [2024-12-07 10:44:51.155366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:51.905 [2024-12-07 10:44:51.155376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.905 [2024-12-07 10:44:51.155427] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:51.905 [2024-12-07 10:44:51.155451] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:51.905 [2024-12-07 10:44:51.155484] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:51.905 [2024-12-07 10:44:51.155504] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:51.905 [2024-12-07 10:44:51.155590] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:51.905 [2024-12-07 10:44:51.155603] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:51.905 [2024-12-07 10:44:51.155616] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:51.905 [2024-12-07 10:44:51.155628] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:51.905 [2024-12-07 10:44:51.155640] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:51.905 [2024-12-07 10:44:51.155651] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:51.905 [2024-12-07 10:44:51.155661] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:51.905 [2024-12-07 10:44:51.155674] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:51.905 [2024-12-07 10:44:51.155684] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:51.905 [2024-12-07 10:44:51.155693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.905 [2024-12-07 10:44:51.155703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:51.905 [2024-12-07 10:44:51.155713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:32:51.905 [2024-12-07 10:44:51.155723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.905 [2024-12-07 10:44:51.155792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.905 [2024-12-07 10:44:51.155804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:51.905 [2024-12-07 10:44:51.155814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:51.905 [2024-12-07 10:44:51.155823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.905 [2024-12-07 10:44:51.155917] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:51.905 [2024-12-07 10:44:51.155932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:51.905 [2024-12-07 10:44:51.155943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:51.905 [2024-12-07 10:44:51.155952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.905 [2024-12-07 10:44:51.155962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:51.905 [2024-12-07 10:44:51.155971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:51.905 [2024-12-07 10:44:51.155994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:51.905 [2024-12-07 10:44:51.156004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:51.905 [2024-12-07 10:44:51.156014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:51.905 [2024-12-07 10:44:51.156023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:51.905 [2024-12-07 10:44:51.156033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:51.905 [2024-12-07 10:44:51.156043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:51.905 [2024-12-07 10:44:51.156052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:51.905 [2024-12-07 10:44:51.156070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:51.905 [2024-12-07 10:44:51.156080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:51.905 [2024-12-07 10:44:51.156089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.905 [2024-12-07 10:44:51.156098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:51.905 [2024-12-07 10:44:51.156107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:51.905 [2024-12-07 10:44:51.156117] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.905 [2024-12-07 10:44:51.156126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:51.905 [2024-12-07 10:44:51.156136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:51.905 [2024-12-07 10:44:51.156144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:51.905 [2024-12-07 10:44:51.156153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:51.905 [2024-12-07 10:44:51.156163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:51.905 [2024-12-07 10:44:51.156172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:51.905 [2024-12-07 10:44:51.156180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:51.905 [2024-12-07 10:44:51.156190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:51.905 [2024-12-07 10:44:51.156198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:51.905 [2024-12-07 10:44:51.156206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:51.905 [2024-12-07 10:44:51.156216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:51.906 [2024-12-07 10:44:51.156224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:51.906 [2024-12-07 10:44:51.156233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:51.906 [2024-12-07 10:44:51.156242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:51.906 [2024-12-07 10:44:51.156250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:51.906 [2024-12-07 10:44:51.156259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:51.906 [2024-12-07 10:44:51.156268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:51.906 [2024-12-07 10:44:51.156276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:51.906 [2024-12-07 10:44:51.156285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:51.906 [2024-12-07 10:44:51.156294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:51.906 [2024-12-07 10:44:51.156303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.906 [2024-12-07 10:44:51.156312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:51.906 [2024-12-07 10:44:51.156321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:51.906 [2024-12-07 10:44:51.156332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.906 [2024-12-07 10:44:51.156341] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:51.906 [2024-12-07 10:44:51.156350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:51.906 [2024-12-07 10:44:51.156360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:51.906 [2024-12-07 10:44:51.156369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.906 [2024-12-07 10:44:51.156379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:51.906 [2024-12-07 10:44:51.156389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:51.906 [2024-12-07 10:44:51.156398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:51.906 
[2024-12-07 10:44:51.156407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:51.906 [2024-12-07 10:44:51.156416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:51.906 [2024-12-07 10:44:51.156425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:51.906 [2024-12-07 10:44:51.156436] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:51.906 [2024-12-07 10:44:51.156448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:51.906 [2024-12-07 10:44:51.156463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:51.906 [2024-12-07 10:44:51.156474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:51.906 [2024-12-07 10:44:51.156485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:51.906 [2024-12-07 10:44:51.156495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:51.906 [2024-12-07 10:44:51.156506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:51.906 [2024-12-07 10:44:51.156516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:51.906 [2024-12-07 10:44:51.156526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:51.906 [2024-12-07 10:44:51.156536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:51.906 [2024-12-07 10:44:51.156546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:51.906 [2024-12-07 10:44:51.156556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:51.906 [2024-12-07 10:44:51.156566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:51.906 [2024-12-07 10:44:51.156576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:51.906 [2024-12-07 10:44:51.156586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:51.906 [2024-12-07 10:44:51.156596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:51.906 [2024-12-07 10:44:51.156605] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:51.906 [2024-12-07 10:44:51.156616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:51.906 [2024-12-07 10:44:51.156628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:51.906 [2024-12-07 10:44:51.156638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:51.906 [2024-12-07 10:44:51.156647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:51.906 [2024-12-07 10:44:51.156661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:51.906 [2024-12-07 10:44:51.156673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.906 [2024-12-07 10:44:51.156684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:51.906 [2024-12-07 10:44:51.156694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:32:51.906 [2024-12-07 10:44:51.156703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.906 [2024-12-07 10:44:51.191378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.906 [2024-12-07 10:44:51.191539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:51.906 [2024-12-07 10:44:51.191645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.684 ms 00:32:51.906 [2024-12-07 10:44:51.191690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.906 [2024-12-07 10:44:51.191798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.906 [2024-12-07 10:44:51.191830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:51.906 [2024-12-07 10:44:51.191860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:32:51.906 [2024-12-07 10:44:51.191944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.166 [2024-12-07 10:44:51.263092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.166 [2024-12-07 10:44:51.263253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:52.166 [2024-12-07 10:44:51.263417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.155 ms 00:32:52.167 [2024-12-07 10:44:51.263457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.263520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.263601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:52.167 [2024-12-07 10:44:51.263645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:52.167 [2024-12-07 10:44:51.263674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.264260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.264367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:52.167 [2024-12-07 10:44:51.264434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:32:52.167 [2024-12-07 10:44:51.264468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.264612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.264665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:52.167 [2024-12-07 10:44:51.264738] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:32:52.167 [2024-12-07 10:44:51.264768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.283266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.283389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:52.167 [2024-12-07 10:44:51.283482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.485 ms 00:32:52.167 [2024-12-07 10:44:51.283518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.301241] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:52.167 [2024-12-07 10:44:51.301406] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:52.167 [2024-12-07 10:44:51.301524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.301557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:52.167 [2024-12-07 10:44:51.301587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.910 ms 00:32:52.167 [2024-12-07 10:44:51.301616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.329577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.329721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:52.167 [2024-12-07 10:44:51.329857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.947 ms 00:32:52.167 [2024-12-07 10:44:51.329894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.347124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.347247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:52.167 [2024-12-07 10:44:51.347339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.189 ms 00:32:52.167 [2024-12-07 10:44:51.347372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.364296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.364418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:52.167 [2024-12-07 10:44:51.364588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.834 ms 00:32:52.167 [2024-12-07 10:44:51.364622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.365369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.365484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:52.167 [2024-12-07 10:44:51.365557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:32:52.167 [2024-12-07 10:44:51.365596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.445911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.446166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:52.167 [2024-12-07 10:44:51.446254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 80.399 ms 00:32:52.167 [2024-12-07 10:44:51.446277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.457052] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:52.167 [2024-12-07 10:44:51.459434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.459465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:52.167 [2024-12-07 10:44:51.459478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.132 ms 00:32:52.167 [2024-12-07 10:44:51.459487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.459557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.459570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:52.167 [2024-12-07 10:44:51.459580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:52.167 [2024-12-07 10:44:51.459590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.459663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.459675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:52.167 [2024-12-07 10:44:51.459685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:32:52.167 [2024-12-07 10:44:51.459695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.459714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.459724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:52.167 [2024-12-07 10:44:51.459734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:52.167 [2024-12-07 10:44:51.459743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.459777] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:52.167 [2024-12-07 10:44:51.459791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.459801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:52.167 [2024-12-07 10:44:51.459810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:52.167 [2024-12-07 10:44:51.459820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.493882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.493922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:52.167 [2024-12-07 10:44:51.493935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.100 ms 00:32:52.167 [2024-12-07 10:44:51.493951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.167 [2024-12-07 10:44:51.494031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.167 [2024-12-07 10:44:51.494043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:52.167 [2024-12-07 10:44:51.494054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:32:52.167 [2024-12-07 10:44:51.494063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
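A quick consistency check on the layout numbers dumped above (plain arithmetic on the reported values; the 4 KiB FTL block size is an assumption, implied by the bs=4K / --count=262144 usage elsewhere in this test): 20971520 L2P entries x 4 B per entry = 83886080 B, exactly the 80.00 MiB reported for the l2p region, and 20971520 blocks x 4 KiB = 80 GiB of user-addressable space carved out of the 102400 MiB data_btm region on the 103424 MiB base device, the difference presumably serving as metadata and over-provisioning headroom.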
00:32:52.167 [2024-12-07 10:44:51.495201] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 373.898 ms, result 0 00:32:53.547  [2024-12-07T10:44:53.838Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-07T10:44:54.776Z] Copying: 46/1024 [MB] (22 MBps) [2024-12-07T10:44:55.721Z] Copying: 71/1024 [MB] (24 MBps) [2024-12-07T10:44:56.654Z] Copying: 94/1024 [MB] (23 MBps) [2024-12-07T10:44:57.588Z] Copying: 118/1024 [MB] (23 MBps) [2024-12-07T10:44:58.535Z] Copying: 142/1024 [MB] (23 MBps) [2024-12-07T10:44:59.497Z] Copying: 165/1024 [MB] (23 MBps) [2024-12-07T10:45:00.877Z] Copying: 189/1024 [MB] (24 MBps) [2024-12-07T10:45:01.819Z] Copying: 213/1024 [MB] (23 MBps) [2024-12-07T10:45:02.756Z] Copying: 237/1024 [MB] (24 MBps) [2024-12-07T10:45:03.692Z] Copying: 261/1024 [MB] (23 MBps) [2024-12-07T10:45:04.627Z] Copying: 282/1024 [MB] (21 MBps) [2024-12-07T10:45:05.561Z] Copying: 306/1024 [MB] (23 MBps) [2024-12-07T10:45:06.495Z] Copying: 330/1024 [MB] (24 MBps) [2024-12-07T10:45:07.868Z] Copying: 354/1024 [MB] (24 MBps) [2024-12-07T10:45:08.801Z] Copying: 378/1024 [MB] (23 MBps) [2024-12-07T10:45:09.737Z] Copying: 401/1024 [MB] (22 MBps) [2024-12-07T10:45:10.675Z] Copying: 426/1024 [MB] (24 MBps) [2024-12-07T10:45:11.614Z] Copying: 451/1024 [MB] (25 MBps) [2024-12-07T10:45:12.551Z] Copying: 475/1024 [MB] (24 MBps) [2024-12-07T10:45:13.490Z] Copying: 499/1024 [MB] (23 MBps) [2024-12-07T10:45:14.870Z] Copying: 522/1024 [MB] (23 MBps) [2024-12-07T10:45:15.806Z] Copying: 547/1024 [MB] (24 MBps) [2024-12-07T10:45:16.743Z] Copying: 571/1024 [MB] (23 MBps) [2024-12-07T10:45:17.681Z] Copying: 594/1024 [MB] (22 MBps) [2024-12-07T10:45:18.619Z] Copying: 617/1024 [MB] (23 MBps) [2024-12-07T10:45:19.558Z] Copying: 641/1024 [MB] (23 MBps) [2024-12-07T10:45:20.498Z] Copying: 664/1024 [MB] (23 MBps) [2024-12-07T10:45:21.882Z] Copying: 688/1024 [MB] (23 MBps) [2024-12-07T10:45:22.818Z] Copying: 713/1024 [MB] (24 MBps) [2024-12-07T10:45:23.753Z] Copying: 737/1024 [MB] (24 MBps) [2024-12-07T10:45:24.689Z] Copying: 761/1024 [MB] (24 MBps) [2024-12-07T10:45:25.626Z] Copying: 785/1024 [MB] (23 MBps) [2024-12-07T10:45:26.561Z] Copying: 809/1024 [MB] (23 MBps) [2024-12-07T10:45:27.514Z] Copying: 832/1024 [MB] (23 MBps) [2024-12-07T10:45:28.466Z] Copying: 855/1024 [MB] (23 MBps) [2024-12-07T10:45:29.844Z] Copying: 879/1024 [MB] (23 MBps) [2024-12-07T10:45:30.782Z] Copying: 902/1024 [MB] (23 MBps) [2024-12-07T10:45:31.720Z] Copying: 926/1024 [MB] (23 MBps) [2024-12-07T10:45:32.658Z] Copying: 950/1024 [MB] (23 MBps) [2024-12-07T10:45:33.595Z] Copying: 973/1024 [MB] (23 MBps) [2024-12-07T10:45:34.534Z] Copying: 998/1024 [MB] (24 MBps) [2024-12-07T10:45:34.534Z] Copying: 1022/1024 [MB] (23 MBps) [2024-12-07T10:45:34.534Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-07 10:45:34.509095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.181 [2024-12-07 10:45:34.509139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:35.181 [2024-12-07 10:45:34.509156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:35.181 [2024-12-07 10:45:34.509167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.181 [2024-12-07 10:45:34.509189] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:35.181 [2024-12-07 10:45:34.513474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.181 
[2024-12-07 10:45:34.513507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:35.181 [2024-12-07 10:45:34.513525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.273 ms 00:33:35.181 [2024-12-07 10:45:34.513535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.181 [2024-12-07 10:45:34.515680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.181 [2024-12-07 10:45:34.515827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:35.181 [2024-12-07 10:45:34.515848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms 00:33:35.181 [2024-12-07 10:45:34.515859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.181 [2024-12-07 10:45:34.515893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.181 [2024-12-07 10:45:34.515903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:35.182 [2024-12-07 10:45:34.515914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:35.182 [2024-12-07 10:45:34.515923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.182 [2024-12-07 10:45:34.515992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.182 [2024-12-07 10:45:34.516004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:35.182 [2024-12-07 10:45:34.516015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:33:35.182 [2024-12-07 10:45:34.516024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.182 [2024-12-07 10:45:34.516039] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:35.182 [2024-12-07 10:45:34.516053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 
10:45:34.516181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 
00:33:35.182 [2024-12-07 10:45:34.516458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 
wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:35.182 [2024-12-07 10:45:34.516934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.516944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.516954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.516964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.516975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.516993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.517004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.517014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.517024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.517034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.517045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.517055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.517065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.517075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.517085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.517095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.517105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:35.183 [2024-12-07 10:45:34.517121] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:35.183 [2024-12-07 10:45:34.517131] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1550b398-958a-49a9-bb53-5ab7cdf56510 00:33:35.183 [2024-12-07 10:45:34.517141] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:35.183 [2024-12-07 10:45:34.517150] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:35.183 [2024-12-07 10:45:34.517159] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:35.183 [2024-12-07 10:45:34.517172] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:35.183 [2024-12-07 10:45:34.517181] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:35.183 [2024-12-07 10:45:34.517190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:35.183 [2024-12-07 10:45:34.517199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:35.183 [2024-12-07 10:45:34.517208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:35.183 [2024-12-07 10:45:34.517217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:35.183 [2024-12-07 10:45:34.517230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.183 [2024-12-07 10:45:34.517240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:35.183 [2024-12-07 10:45:34.517250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:33:35.183 [2024-12-07 10:45:34.517259] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.443 [2024-12-07 10:45:34.536733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.443 [2024-12-07 10:45:34.536772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:35.443 [2024-12-07 10:45:34.536784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.490 ms 00:33:35.443 [2024-12-07 10:45:34.536793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.443 [2024-12-07 10:45:34.537346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.443 [2024-12-07 10:45:34.537360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:35.443 [2024-12-07 10:45:34.537370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:33:35.443 [2024-12-07 10:45:34.537380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.443 [2024-12-07 10:45:34.585931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.443 [2024-12-07 10:45:34.585967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:35.443 [2024-12-07 10:45:34.586009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.443 [2024-12-07 10:45:34.586019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.443 [2024-12-07 10:45:34.586070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.443 [2024-12-07 10:45:34.586080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:35.443 [2024-12-07 10:45:34.586091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.443 [2024-12-07 10:45:34.586120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.443 [2024-12-07 10:45:34.586171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.443 [2024-12-07 10:45:34.586189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:35.443 [2024-12-07 10:45:34.586199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.443 [2024-12-07 10:45:34.586208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.443 [2024-12-07 10:45:34.586224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.443 [2024-12-07 10:45:34.586234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:35.443 [2024-12-07 10:45:34.586246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.443 [2024-12-07 10:45:34.586256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.443 [2024-12-07 10:45:34.702146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.443 [2024-12-07 10:45:34.702203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:35.443 [2024-12-07 10:45:34.702216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.443 [2024-12-07 10:45:34.702242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.703 [2024-12-07 10:45:34.796655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.703 [2024-12-07 10:45:34.796851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:35.703 [2024-12-07 10:45:34.796874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:33:35.703 [2024-12-07 10:45:34.796885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.703 [2024-12-07 10:45:34.796973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.703 [2024-12-07 10:45:34.797006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:35.703 [2024-12-07 10:45:34.797025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.703 [2024-12-07 10:45:34.797035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.703 [2024-12-07 10:45:34.797074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.703 [2024-12-07 10:45:34.797086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:35.703 [2024-12-07 10:45:34.797097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.703 [2024-12-07 10:45:34.797108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.703 [2024-12-07 10:45:34.797205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.703 [2024-12-07 10:45:34.797218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:35.703 [2024-12-07 10:45:34.797240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.703 [2024-12-07 10:45:34.797253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.703 [2024-12-07 10:45:34.797284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.703 [2024-12-07 10:45:34.797296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:35.703 [2024-12-07 10:45:34.797306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.703 [2024-12-07 10:45:34.797316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.703 [2024-12-07 10:45:34.797353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.703 [2024-12-07 10:45:34.797364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:35.703 [2024-12-07 10:45:34.797374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.703 [2024-12-07 10:45:34.797387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.703 [2024-12-07 10:45:34.797430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.703 [2024-12-07 10:45:34.797442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:35.703 [2024-12-07 10:45:34.797452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.703 [2024-12-07 10:45:34.797462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.703 [2024-12-07 10:45:34.797581] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 288.920 ms, result 0 00:33:37.078 00:33:37.078 00:33:37.078 10:45:36 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:33:37.078 [2024-12-07 10:45:36.329295] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:33:37.078 [2024-12-07 10:45:36.329572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85388 ] 00:33:37.336 [2024-12-07 10:45:36.509861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.336 [2024-12-07 10:45:36.623052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.903 [2024-12-07 10:45:36.954699] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:37.903 [2024-12-07 10:45:36.954975] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:37.903 [2024-12-07 10:45:37.114618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.903 [2024-12-07 10:45:37.114839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:37.903 [2024-12-07 10:45:37.114932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:37.903 [2024-12-07 10:45:37.114968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.903 [2024-12-07 10:45:37.115069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.903 [2024-12-07 10:45:37.115109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:37.903 [2024-12-07 10:45:37.115139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:37.903 [2024-12-07 10:45:37.115231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.903 [2024-12-07 10:45:37.115288] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:37.903 [2024-12-07 10:45:37.116309] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:37.903 [2024-12-07 10:45:37.116451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.903 [2024-12-07 10:45:37.116525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:37.903 [2024-12-07 10:45:37.116559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:33:37.903 [2024-12-07 10:45:37.116588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.903 [2024-12-07 10:45:37.116947] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:37.903 [2024-12-07 10:45:37.117099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.903 [2024-12-07 10:45:37.117150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:37.903 [2024-12-07 10:45:37.117224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:33:37.903 [2024-12-07 10:45:37.117258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.903 [2024-12-07 10:45:37.117355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.903 [2024-12-07 10:45:37.117439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:37.903 [2024-12-07 10:45:37.117519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:33:37.903 [2024-12-07 10:45:37.117548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.903 [2024-12-07 10:45:37.118036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:37.903 [2024-12-07 10:45:37.118147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:37.903 [2024-12-07 10:45:37.118247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:33:37.903 [2024-12-07 10:45:37.118281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.903 [2024-12-07 10:45:37.118378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.903 [2024-12-07 10:45:37.118472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:37.903 [2024-12-07 10:45:37.118527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:33:37.903 [2024-12-07 10:45:37.118555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.903 [2024-12-07 10:45:37.118601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.903 [2024-12-07 10:45:37.118646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:37.903 [2024-12-07 10:45:37.118681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:37.903 [2024-12-07 10:45:37.118710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.903 [2024-12-07 10:45:37.118769] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:37.903 [2024-12-07 10:45:37.124004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.903 [2024-12-07 10:45:37.124119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:37.903 [2024-12-07 10:45:37.124213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.247 ms 00:33:37.903 [2024-12-07 10:45:37.124247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.903 [2024-12-07 10:45:37.124311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.903 [2024-12-07 10:45:37.124344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:37.903 [2024-12-07 10:45:37.124373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:37.903 [2024-12-07 10:45:37.124401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.903 [2024-12-07 10:45:37.124473] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:37.903 [2024-12-07 10:45:37.124635] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:37.903 [2024-12-07 10:45:37.124673] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:37.903 [2024-12-07 10:45:37.124689] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:37.903 [2024-12-07 10:45:37.124776] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:37.903 [2024-12-07 10:45:37.124788] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:37.903 [2024-12-07 10:45:37.124802] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:37.903 [2024-12-07 10:45:37.124814] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:37.903 [2024-12-07 10:45:37.124826] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:37.903 [2024-12-07 10:45:37.124840] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:37.903 [2024-12-07 10:45:37.124849] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:37.903 [2024-12-07 10:45:37.124859] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:37.903 [2024-12-07 10:45:37.124869] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:37.903 [2024-12-07 10:45:37.124879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.903 [2024-12-07 10:45:37.124889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:37.903 [2024-12-07 10:45:37.124899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:33:37.903 [2024-12-07 10:45:37.124909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.903 [2024-12-07 10:45:37.124994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.903 [2024-12-07 10:45:37.125022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:37.903 [2024-12-07 10:45:37.125032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:33:37.903 [2024-12-07 10:45:37.125045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.903 [2024-12-07 10:45:37.125135] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:37.903 [2024-12-07 10:45:37.125150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:37.903 [2024-12-07 10:45:37.125161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:37.903 [2024-12-07 10:45:37.125171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:37.903 [2024-12-07 10:45:37.125183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:37.903 [2024-12-07 10:45:37.125193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:37.903 [2024-12-07 10:45:37.125202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:37.903 [2024-12-07 10:45:37.125212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:37.903 [2024-12-07 10:45:37.125221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:37.903 [2024-12-07 10:45:37.125230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:37.903 [2024-12-07 10:45:37.125239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:37.903 [2024-12-07 10:45:37.125249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:37.903 [2024-12-07 10:45:37.125258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:37.903 [2024-12-07 10:45:37.125267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:37.903 [2024-12-07 10:45:37.125276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:37.903 [2024-12-07 10:45:37.125295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:37.903 [2024-12-07 10:45:37.125305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:37.903 [2024-12-07 10:45:37.125315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:37.903 [2024-12-07 10:45:37.125324] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:37.903 [2024-12-07 10:45:37.125333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:37.903 [2024-12-07 10:45:37.125343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:37.903 [2024-12-07 10:45:37.125352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:37.903 [2024-12-07 10:45:37.125361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:37.903 [2024-12-07 10:45:37.125370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:37.903 [2024-12-07 10:45:37.125379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:37.903 [2024-12-07 10:45:37.125388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:37.903 [2024-12-07 10:45:37.125398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:37.903 [2024-12-07 10:45:37.125406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:37.903 [2024-12-07 10:45:37.125415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:37.903 [2024-12-07 10:45:37.125424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:37.903 [2024-12-07 10:45:37.125434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:37.903 [2024-12-07 10:45:37.125443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:37.903 [2024-12-07 10:45:37.125452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:37.903 [2024-12-07 10:45:37.125461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:37.903 [2024-12-07 10:45:37.125470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:37.903 [2024-12-07 10:45:37.125479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:37.903 [2024-12-07 10:45:37.125489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:37.904 [2024-12-07 10:45:37.125498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:37.904 [2024-12-07 10:45:37.125507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:37.904 [2024-12-07 10:45:37.125516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:37.904 [2024-12-07 10:45:37.125525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:37.904 [2024-12-07 10:45:37.125533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:37.904 [2024-12-07 10:45:37.125542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:37.904 [2024-12-07 10:45:37.125551] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:37.904 [2024-12-07 10:45:37.125561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:37.904 [2024-12-07 10:45:37.125571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:37.904 [2024-12-07 10:45:37.125581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:37.904 [2024-12-07 10:45:37.125594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:37.904 [2024-12-07 10:45:37.125604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:37.904 [2024-12-07 10:45:37.125612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:37.904 
[2024-12-07 10:45:37.125622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:37.904 [2024-12-07 10:45:37.125631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:37.904 [2024-12-07 10:45:37.125641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:37.904 [2024-12-07 10:45:37.125651] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:37.904 [2024-12-07 10:45:37.125663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:37.904 [2024-12-07 10:45:37.125675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:37.904 [2024-12-07 10:45:37.125686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:37.904 [2024-12-07 10:45:37.125696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:37.904 [2024-12-07 10:45:37.125707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:37.904 [2024-12-07 10:45:37.125717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:37.904 [2024-12-07 10:45:37.125727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:37.904 [2024-12-07 10:45:37.125737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:37.904 [2024-12-07 10:45:37.125747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:37.904 [2024-12-07 10:45:37.125758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:37.904 [2024-12-07 10:45:37.125768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:37.904 [2024-12-07 10:45:37.125778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:37.904 [2024-12-07 10:45:37.125788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:37.904 [2024-12-07 10:45:37.125798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:37.904 [2024-12-07 10:45:37.125809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:37.904 [2024-12-07 10:45:37.125819] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:37.904 [2024-12-07 10:45:37.125830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:37.904 [2024-12-07 10:45:37.125841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:37.904 [2024-12-07 10:45:37.125851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:37.904 [2024-12-07 10:45:37.125861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:37.904 [2024-12-07 10:45:37.125871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:37.904 [2024-12-07 10:45:37.125882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.125892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:37.904 [2024-12-07 10:45:37.125902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:33:37.904 [2024-12-07 10:45:37.125912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.904 [2024-12-07 10:45:37.159963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.160009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:37.904 [2024-12-07 10:45:37.160022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.066 ms 00:33:37.904 [2024-12-07 10:45:37.160033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.904 [2024-12-07 10:45:37.160104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.160118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:37.904 [2024-12-07 10:45:37.160128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:33:37.904 [2024-12-07 10:45:37.160138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.904 [2024-12-07 10:45:37.212333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.212368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:37.904 [2024-12-07 10:45:37.212382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.230 ms 00:33:37.904 [2024-12-07 10:45:37.212395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.904 [2024-12-07 10:45:37.212429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.212440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:37.904 [2024-12-07 10:45:37.212450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:37.904 [2024-12-07 10:45:37.212459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.904 [2024-12-07 10:45:37.212573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.212585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:37.904 [2024-12-07 10:45:37.212595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:37.904 [2024-12-07 10:45:37.212610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.904 [2024-12-07 10:45:37.212711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.212723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:37.904 [2024-12-07 10:45:37.212733] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:33:37.904 [2024-12-07 10:45:37.212743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.904 [2024-12-07 10:45:37.232745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.232877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:37.904 [2024-12-07 10:45:37.232915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.014 ms 00:33:37.904 [2024-12-07 10:45:37.232925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.904 [2024-12-07 10:45:37.233087] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:37.904 [2024-12-07 10:45:37.233103] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:37.904 [2024-12-07 10:45:37.233120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.233130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:37.904 [2024-12-07 10:45:37.233142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:33:37.904 [2024-12-07 10:45:37.233152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.904 [2024-12-07 10:45:37.243673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.243795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:37.904 [2024-12-07 10:45:37.243832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.520 ms 00:33:37.904 [2024-12-07 10:45:37.243842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.904 [2024-12-07 10:45:37.243956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.243967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:37.904 [2024-12-07 10:45:37.243983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:33:37.904 [2024-12-07 10:45:37.244007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.904 [2024-12-07 10:45:37.244059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.244071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:37.904 [2024-12-07 10:45:37.244091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:37.904 [2024-12-07 10:45:37.244101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.904 [2024-12-07 10:45:37.244755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.244770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:37.904 [2024-12-07 10:45:37.244780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:33:37.904 [2024-12-07 10:45:37.244795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.904 [2024-12-07 10:45:37.244813] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:37.904 [2024-12-07 10:45:37.244826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.904 [2024-12-07 10:45:37.244836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:37.904 [2024-12-07 10:45:37.244846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:37.904 [2024-12-07 10:45:37.244855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.163 [2024-12-07 10:45:37.256816] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:38.163 [2024-12-07 10:45:37.257010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.163 [2024-12-07 10:45:37.257040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:38.163 [2024-12-07 10:45:37.257052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.155 ms 00:33:38.163 [2024-12-07 10:45:37.257062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.163 [2024-12-07 10:45:37.258869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.163 [2024-12-07 10:45:37.258901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:38.163 [2024-12-07 10:45:37.258913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.784 ms 00:33:38.163 [2024-12-07 10:45:37.258923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.163 [2024-12-07 10:45:37.259026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.163 [2024-12-07 10:45:37.259040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:38.163 [2024-12-07 10:45:37.259051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:33:38.163 [2024-12-07 10:45:37.259062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.163 [2024-12-07 10:45:37.259091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.163 [2024-12-07 10:45:37.259102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:38.163 [2024-12-07 10:45:37.259112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:38.163 [2024-12-07 10:45:37.259122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.163 [2024-12-07 10:45:37.259161] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:38.163 [2024-12-07 10:45:37.259174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.163 [2024-12-07 10:45:37.259183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:38.163 [2024-12-07 10:45:37.259194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:38.163 [2024-12-07 10:45:37.259204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.163 [2024-12-07 10:45:37.296363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.163 [2024-12-07 10:45:37.296519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:38.163 [2024-12-07 10:45:37.296541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.197 ms 00:33:38.163 [2024-12-07 10:45:37.296552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.163 [2024-12-07 10:45:37.296620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.163 [2024-12-07 10:45:37.296631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:38.163 [2024-12-07 10:45:37.296642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:33:38.163 [2024-12-07 10:45:37.296652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.163 [2024-12-07 10:45:37.298001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 183.246 ms, result 0 00:33:39.559  [2024-12-07T10:45:39.847Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-07T10:45:40.782Z] Copying: 50/1024 [MB] (24 MBps) [2024-12-07T10:45:41.716Z] Copying: 75/1024 [MB] (25 MBps) [2024-12-07T10:45:42.654Z] Copying: 101/1024 [MB] (25 MBps) [2024-12-07T10:45:43.590Z] Copying: 126/1024 [MB] (25 MBps) [2024-12-07T10:45:44.527Z] Copying: 150/1024 [MB] (24 MBps) [2024-12-07T10:45:45.904Z] Copying: 175/1024 [MB] (24 MBps) [2024-12-07T10:45:46.839Z] Copying: 199/1024 [MB] (24 MBps) [2024-12-07T10:45:47.773Z] Copying: 225/1024 [MB] (25 MBps) [2024-12-07T10:45:48.707Z] Copying: 250/1024 [MB] (25 MBps) [2024-12-07T10:45:49.641Z] Copying: 276/1024 [MB] (25 MBps) [2024-12-07T10:45:50.575Z] Copying: 303/1024 [MB] (26 MBps) [2024-12-07T10:45:51.510Z] Copying: 329/1024 [MB] (26 MBps) [2024-12-07T10:45:52.886Z] Copying: 354/1024 [MB] (24 MBps) [2024-12-07T10:45:53.823Z] Copying: 379/1024 [MB] (25 MBps) [2024-12-07T10:45:54.758Z] Copying: 403/1024 [MB] (24 MBps) [2024-12-07T10:45:55.719Z] Copying: 429/1024 [MB] (25 MBps) [2024-12-07T10:45:56.729Z] Copying: 455/1024 [MB] (25 MBps) [2024-12-07T10:45:57.665Z] Copying: 480/1024 [MB] (24 MBps) [2024-12-07T10:45:58.603Z] Copying: 504/1024 [MB] (24 MBps) [2024-12-07T10:45:59.540Z] Copying: 528/1024 [MB] (24 MBps) [2024-12-07T10:46:00.474Z] Copying: 552/1024 [MB] (23 MBps) [2024-12-07T10:46:01.852Z] Copying: 576/1024 [MB] (23 MBps) [2024-12-07T10:46:02.791Z] Copying: 600/1024 [MB] (24 MBps) [2024-12-07T10:46:03.727Z] Copying: 625/1024 [MB] (24 MBps) [2024-12-07T10:46:04.665Z] Copying: 649/1024 [MB] (24 MBps) [2024-12-07T10:46:05.605Z] Copying: 674/1024 [MB] (24 MBps) [2024-12-07T10:46:06.539Z] Copying: 698/1024 [MB] (23 MBps) [2024-12-07T10:46:07.476Z] Copying: 722/1024 [MB] (24 MBps) [2024-12-07T10:46:08.854Z] Copying: 748/1024 [MB] (25 MBps) [2024-12-07T10:46:09.794Z] Copying: 773/1024 [MB] (25 MBps) [2024-12-07T10:46:10.733Z] Copying: 798/1024 [MB] (24 MBps) [2024-12-07T10:46:11.669Z] Copying: 823/1024 [MB] (24 MBps) [2024-12-07T10:46:12.605Z] Copying: 847/1024 [MB] (24 MBps) [2024-12-07T10:46:13.544Z] Copying: 871/1024 [MB] (24 MBps) [2024-12-07T10:46:14.483Z] Copying: 895/1024 [MB] (24 MBps) [2024-12-07T10:46:15.862Z] Copying: 920/1024 [MB] (24 MBps) [2024-12-07T10:46:16.807Z] Copying: 943/1024 [MB] (23 MBps) [2024-12-07T10:46:17.742Z] Copying: 968/1024 [MB] (24 MBps) [2024-12-07T10:46:18.681Z] Copying: 991/1024 [MB] (23 MBps) [2024-12-07T10:46:18.941Z] Copying: 1015/1024 [MB] (23 MBps) [2024-12-07T10:46:19.202Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-07 10:46:18.991890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.849 [2024-12-07 10:46:18.992053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:19.849 [2024-12-07 10:46:18.992105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:19.849 [2024-12-07 10:46:18.992142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.849 [2024-12-07 10:46:18.992233] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:19.850 [2024-12-07 10:46:19.002553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:19.850 [2024-12-07 10:46:19.002629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:19.850 [2024-12-07 10:46:19.002658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.283 ms 00:34:19.850 [2024-12-07 10:46:19.002681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.850 [2024-12-07 10:46:19.003113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.850 [2024-12-07 10:46:19.003145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:19.850 [2024-12-07 10:46:19.003169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:34:19.850 [2024-12-07 10:46:19.003190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.850 [2024-12-07 10:46:19.003255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.850 [2024-12-07 10:46:19.003279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:19.850 [2024-12-07 10:46:19.003303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:19.850 [2024-12-07 10:46:19.003324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.850 [2024-12-07 10:46:19.003416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.850 [2024-12-07 10:46:19.003441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:19.850 [2024-12-07 10:46:19.003464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:34:19.850 [2024-12-07 10:46:19.003484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.850 [2024-12-07 10:46:19.003516] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:19.850 [2024-12-07 10:46:19.003553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 
[2024-12-07 10:46:19.003828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.003967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:34:19.850 [2024-12-07 10:46:19.004423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.004968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.005010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.005033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.005055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.005077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.005099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.005121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.005144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:19.850 [2024-12-07 10:46:19.005166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:19.851 [2024-12-07 10:46:19.005882] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:19.851 [2024-12-07 10:46:19.005904] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1550b398-958a-49a9-bb53-5ab7cdf56510 00:34:19.851 [2024-12-07 10:46:19.005927] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:19.851 [2024-12-07 10:46:19.005947] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:34:19.851 [2024-12-07 10:46:19.005968] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:19.851 [2024-12-07 10:46:19.006786] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:19.851 [2024-12-07 10:46:19.007262] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:19.851 [2024-12-07 10:46:19.007403] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:19.851 [2024-12-07 10:46:19.007530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:19.851 [2024-12-07 10:46:19.007742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:19.851 [2024-12-07 10:46:19.007810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:19.851 [2024-12-07 10:46:19.008059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.851 [2024-12-07 10:46:19.008135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:19.851 [2024-12-07 10:46:19.008213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.549 ms 00:34:19.851 [2024-12-07 
10:46:19.008287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.851 [2024-12-07 10:46:19.030180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.851 [2024-12-07 10:46:19.030327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:19.851 [2024-12-07 10:46:19.030448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.867 ms 00:34:19.851 [2024-12-07 10:46:19.030489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.851 [2024-12-07 10:46:19.031152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.851 [2024-12-07 10:46:19.031274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:19.851 [2024-12-07 10:46:19.031352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:34:19.851 [2024-12-07 10:46:19.031390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.851 [2024-12-07 10:46:19.083001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.851 [2024-12-07 10:46:19.083160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:19.851 [2024-12-07 10:46:19.083249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.851 [2024-12-07 10:46:19.083288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.851 [2024-12-07 10:46:19.083375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.851 [2024-12-07 10:46:19.083419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:19.851 [2024-12-07 10:46:19.083454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.851 [2024-12-07 10:46:19.083536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.851 [2024-12-07 10:46:19.083637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.851 [2024-12-07 10:46:19.083679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:19.851 [2024-12-07 10:46:19.083714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.851 [2024-12-07 10:46:19.083747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.851 [2024-12-07 10:46:19.083851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.851 [2024-12-07 10:46:19.083891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:19.851 [2024-12-07 10:46:19.083934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.851 [2024-12-07 10:46:19.083968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.110 [2024-12-07 10:46:19.211155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:20.110 [2024-12-07 10:46:19.211398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:20.110 [2024-12-07 10:46:19.211554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:20.110 [2024-12-07 10:46:19.211598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.110 [2024-12-07 10:46:19.312478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:20.111 [2024-12-07 10:46:19.312676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:20.111 [2024-12-07 10:46:19.312775] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:20.111 [2024-12-07 10:46:19.312814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.111 [2024-12-07 10:46:19.312959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:20.111 [2024-12-07 10:46:19.313016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:20.111 [2024-12-07 10:46:19.313111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:20.111 [2024-12-07 10:46:19.313137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.111 [2024-12-07 10:46:19.313228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:20.111 [2024-12-07 10:46:19.313243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:20.111 [2024-12-07 10:46:19.313256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:20.111 [2024-12-07 10:46:19.313274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.111 [2024-12-07 10:46:19.313388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:20.111 [2024-12-07 10:46:19.313404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:20.111 [2024-12-07 10:46:19.313418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:20.111 [2024-12-07 10:46:19.313430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.111 [2024-12-07 10:46:19.313469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:20.111 [2024-12-07 10:46:19.313484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:20.111 [2024-12-07 10:46:19.313497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:20.111 [2024-12-07 10:46:19.313509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.111 [2024-12-07 10:46:19.313566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:20.111 [2024-12-07 10:46:19.313579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:20.111 [2024-12-07 10:46:19.313592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:20.111 [2024-12-07 10:46:19.313604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.111 [2024-12-07 10:46:19.313665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:20.111 [2024-12-07 10:46:19.313680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:20.111 [2024-12-07 10:46:19.313693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:20.111 [2024-12-07 10:46:19.313711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.111 [2024-12-07 10:46:19.313871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 322.483 ms, result 0 00:34:21.488 00:34:21.488 00:34:21.488 10:46:20 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:22.862 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:22.863 10:46:22 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:34:22.863 
[2024-12-07 10:46:22.215408] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:34:23.121 [2024-12-07 10:46:22.215723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85845 ] 00:34:23.121 [2024-12-07 10:46:22.395546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:23.379 [2024-12-07 10:46:22.527618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.637 [2024-12-07 10:46:22.952091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:23.637 [2024-12-07 10:46:22.952182] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:23.896 [2024-12-07 10:46:23.118366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.896 [2024-12-07 10:46:23.118428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:23.896 [2024-12-07 10:46:23.118448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:23.896 [2024-12-07 10:46:23.118460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.896 [2024-12-07 10:46:23.118515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.896 [2024-12-07 10:46:23.118533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:23.896 [2024-12-07 10:46:23.118546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:34:23.896 [2024-12-07 10:46:23.118558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.896 [2024-12-07 10:46:23.118584] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:23.896 [2024-12-07 10:46:23.119548] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:23.896 [2024-12-07 10:46:23.119587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.896 [2024-12-07 10:46:23.119599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:23.896 [2024-12-07 10:46:23.119612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:34:23.896 [2024-12-07 10:46:23.119624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.896 [2024-12-07 10:46:23.120078] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:23.896 [2024-12-07 10:46:23.120104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.896 [2024-12-07 10:46:23.120121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:23.896 [2024-12-07 10:46:23.120134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:34:23.896 [2024-12-07 10:46:23.120146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.896 [2024-12-07 10:46:23.120211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.897 [2024-12-07 10:46:23.120224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:23.897 [2024-12-07 10:46:23.120236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:34:23.897 [2024-12-07 10:46:23.120247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:34:23.897 [2024-12-07 10:46:23.120689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.897 [2024-12-07 10:46:23.120704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:23.897 [2024-12-07 10:46:23.120716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:34:23.897 [2024-12-07 10:46:23.120727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.897 [2024-12-07 10:46:23.120807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.897 [2024-12-07 10:46:23.120823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:23.897 [2024-12-07 10:46:23.120835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:34:23.897 [2024-12-07 10:46:23.120846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.897 [2024-12-07 10:46:23.120873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.897 [2024-12-07 10:46:23.120886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:23.897 [2024-12-07 10:46:23.120902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:23.897 [2024-12-07 10:46:23.120914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.897 [2024-12-07 10:46:23.120940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:23.897 [2024-12-07 10:46:23.127128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.897 [2024-12-07 10:46:23.127332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:23.897 [2024-12-07 10:46:23.127502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.201 ms 00:34:23.897 [2024-12-07 10:46:23.127545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.897 [2024-12-07 10:46:23.127620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.897 [2024-12-07 10:46:23.127738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:23.897 [2024-12-07 10:46:23.127781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:23.897 [2024-12-07 10:46:23.127816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.897 [2024-12-07 10:46:23.127955] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:23.897 [2024-12-07 10:46:23.128166] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:23.897 [2024-12-07 10:46:23.128356] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:23.897 [2024-12-07 10:46:23.128488] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:23.897 [2024-12-07 10:46:23.128641] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:23.897 [2024-12-07 10:46:23.128852] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:23.897 [2024-12-07 10:46:23.128914] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:23.897 [2024-12-07 10:46:23.128973] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:23.897 [2024-12-07 10:46:23.129106] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:23.897 [2024-12-07 10:46:23.129173] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:23.897 [2024-12-07 10:46:23.129208] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:23.897 [2024-12-07 10:46:23.129242] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:23.897 [2024-12-07 10:46:23.129320] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:23.897 [2024-12-07 10:46:23.129412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.897 [2024-12-07 10:46:23.129451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:23.897 [2024-12-07 10:46:23.129536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.462 ms 00:34:23.897 [2024-12-07 10:46:23.129573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.897 [2024-12-07 10:46:23.129684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.897 [2024-12-07 10:46:23.129821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:23.897 [2024-12-07 10:46:23.129860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:34:23.897 [2024-12-07 10:46:23.129901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.897 [2024-12-07 10:46:23.130046] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:23.897 [2024-12-07 10:46:23.130247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:23.897 [2024-12-07 10:46:23.130289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:23.897 [2024-12-07 10:46:23.130323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:23.897 [2024-12-07 10:46:23.130359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:23.897 [2024-12-07 10:46:23.130394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:23.897 [2024-12-07 10:46:23.130600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:23.897 [2024-12-07 10:46:23.130642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:23.897 [2024-12-07 10:46:23.130686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:23.897 [2024-12-07 10:46:23.130699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:23.897 [2024-12-07 10:46:23.130709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:23.897 [2024-12-07 10:46:23.130720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:23.897 [2024-12-07 10:46:23.130731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:23.897 [2024-12-07 10:46:23.130741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:23.897 [2024-12-07 10:46:23.130752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:23.897 [2024-12-07 10:46:23.130776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:23.897 [2024-12-07 10:46:23.130787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:23.897 [2024-12-07 10:46:23.130798] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:23.897 [2024-12-07 10:46:23.130808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:23.897 [2024-12-07 10:46:23.130819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:23.897 [2024-12-07 10:46:23.130829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:23.897 [2024-12-07 10:46:23.130840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:23.897 [2024-12-07 10:46:23.130850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:23.897 [2024-12-07 10:46:23.130860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:23.897 [2024-12-07 10:46:23.130871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:23.897 [2024-12-07 10:46:23.130881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:23.897 [2024-12-07 10:46:23.130891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:23.897 [2024-12-07 10:46:23.130902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:23.897 [2024-12-07 10:46:23.130912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:23.897 [2024-12-07 10:46:23.130922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:23.897 [2024-12-07 10:46:23.130932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:23.897 [2024-12-07 10:46:23.130942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:23.897 [2024-12-07 10:46:23.130953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:23.897 [2024-12-07 10:46:23.130963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:23.897 [2024-12-07 10:46:23.130974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:23.897 [2024-12-07 10:46:23.130998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:23.897 [2024-12-07 10:46:23.131010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:23.897 [2024-12-07 10:46:23.131022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:23.897 [2024-12-07 10:46:23.131032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:23.897 [2024-12-07 10:46:23.131043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:23.897 [2024-12-07 10:46:23.131053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:23.897 [2024-12-07 10:46:23.131064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:23.897 [2024-12-07 10:46:23.131074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:23.897 [2024-12-07 10:46:23.131086] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:23.897 [2024-12-07 10:46:23.131098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:23.897 [2024-12-07 10:46:23.131110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:23.897 [2024-12-07 10:46:23.131121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:23.897 [2024-12-07 10:46:23.131137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:23.897 [2024-12-07 10:46:23.131147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:23.897 [2024-12-07 
10:46:23.131158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:23.897 [2024-12-07 10:46:23.131169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:23.897 [2024-12-07 10:46:23.131180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:23.897 [2024-12-07 10:46:23.131191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:23.897 [2024-12-07 10:46:23.131205] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:23.897 [2024-12-07 10:46:23.131221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:23.897 [2024-12-07 10:46:23.131234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:23.897 [2024-12-07 10:46:23.131247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:23.897 [2024-12-07 10:46:23.131259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:23.897 [2024-12-07 10:46:23.131271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:23.898 [2024-12-07 10:46:23.131283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:23.898 [2024-12-07 10:46:23.131295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:23.898 [2024-12-07 10:46:23.131306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:23.898 [2024-12-07 10:46:23.131318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:23.898 [2024-12-07 10:46:23.131329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:23.898 [2024-12-07 10:46:23.131340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:23.898 [2024-12-07 10:46:23.131351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:23.898 [2024-12-07 10:46:23.131362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:23.898 [2024-12-07 10:46:23.131374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:23.898 [2024-12-07 10:46:23.131387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:23.898 [2024-12-07 10:46:23.131398] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:23.898 [2024-12-07 10:46:23.131411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:23.898 [2024-12-07 10:46:23.131424] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:23.898 [2024-12-07 10:46:23.131437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:23.898 [2024-12-07 10:46:23.131449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:23.898 [2024-12-07 10:46:23.131461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:23.898 [2024-12-07 10:46:23.131475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.898 [2024-12-07 10:46:23.131487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:23.898 [2024-12-07 10:46:23.131499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.487 ms 00:34:23.898 [2024-12-07 10:46:23.131511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.898 [2024-12-07 10:46:23.175593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.898 [2024-12-07 10:46:23.175743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:23.898 [2024-12-07 10:46:23.175822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.097 ms 00:34:23.898 [2024-12-07 10:46:23.175860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.898 [2024-12-07 10:46:23.175962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.898 [2024-12-07 10:46:23.176018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:23.898 [2024-12-07 10:46:23.176060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:34:23.898 [2024-12-07 10:46:23.176092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.156 [2024-12-07 10:46:23.257263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.156 [2024-12-07 10:46:23.257417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:24.156 [2024-12-07 10:46:23.257530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.144 ms 00:34:24.156 [2024-12-07 10:46:23.257570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.156 [2024-12-07 10:46:23.257647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.156 [2024-12-07 10:46:23.257683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:24.156 [2024-12-07 10:46:23.257717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:24.156 [2024-12-07 10:46:23.257750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.156 [2024-12-07 10:46:23.257998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.156 [2024-12-07 10:46:23.258050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:24.156 [2024-12-07 10:46:23.258146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:34:24.156 [2024-12-07 10:46:23.258183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.156 [2024-12-07 10:46:23.258349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.156 [2024-12-07 10:46:23.258466] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:24.156 [2024-12-07 10:46:23.258559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:34:24.156 [2024-12-07 10:46:23.258610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.156 [2024-12-07 10:46:23.282138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.282281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:24.157 [2024-12-07 10:46:23.282305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.508 ms 00:34:24.157 [2024-12-07 10:46:23.282332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.282489] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:24.157 [2024-12-07 10:46:23.282506] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:24.157 [2024-12-07 10:46:23.282526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.282539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:24.157 [2024-12-07 10:46:23.282552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:34:24.157 [2024-12-07 10:46:23.282564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.292963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.293012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:24.157 [2024-12-07 10:46:23.293027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.392 ms 00:34:24.157 [2024-12-07 10:46:23.293039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.293164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.293179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:24.157 [2024-12-07 10:46:23.293192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:34:24.157 [2024-12-07 10:46:23.293212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.293270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.293284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:24.157 [2024-12-07 10:46:23.293311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:24.157 [2024-12-07 10:46:23.293322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.294026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.294047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:24.157 [2024-12-07 10:46:23.294060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:34:24.157 [2024-12-07 10:46:23.294072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.294110] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:24.157 [2024-12-07 10:46:23.294125] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.294138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:24.157 [2024-12-07 10:46:23.294152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:34:24.157 [2024-12-07 10:46:23.294164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.307040] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:24.157 [2024-12-07 10:46:23.307392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.307414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:24.157 [2024-12-07 10:46:23.307429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.226 ms 00:34:24.157 [2024-12-07 10:46:23.307442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.309379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.309416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:24.157 [2024-12-07 10:46:23.309430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.913 ms 00:34:24.157 [2024-12-07 10:46:23.309443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.309545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.309559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:24.157 [2024-12-07 10:46:23.309572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:34:24.157 [2024-12-07 10:46:23.309585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.309617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.309637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:24.157 [2024-12-07 10:46:23.309648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:24.157 [2024-12-07 10:46:23.309660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.309703] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:24.157 [2024-12-07 10:46:23.309718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.309729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:24.157 [2024-12-07 10:46:23.309741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:34:24.157 [2024-12-07 10:46:23.309753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.346083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.346233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:24.157 [2024-12-07 10:46:23.346314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.364 ms 00:34:24.157 [2024-12-07 10:46:23.346352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.346463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.157 [2024-12-07 10:46:23.346506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Finalize initialization 00:34:24.157 [2024-12-07 10:46:23.346541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:34:24.157 [2024-12-07 10:46:23.346649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.157 [2024-12-07 10:46:23.348222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 229.672 ms, result 0 00:34:25.170  [2024-12-07T10:46:25.464Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-07T10:46:26.398Z] Copying: 45/1024 [MB] (22 MBps) [2024-12-07T10:46:27.771Z] Copying: 67/1024 [MB] (22 MBps) [2024-12-07T10:46:28.705Z] Copying: 91/1024 [MB] (23 MBps) [2024-12-07T10:46:29.641Z] Copying: 115/1024 [MB] (24 MBps) [2024-12-07T10:46:30.581Z] Copying: 139/1024 [MB] (24 MBps) [2024-12-07T10:46:31.520Z] Copying: 162/1024 [MB] (22 MBps) [2024-12-07T10:46:32.458Z] Copying: 185/1024 [MB] (23 MBps) [2024-12-07T10:46:33.394Z] Copying: 209/1024 [MB] (23 MBps) [2024-12-07T10:46:34.772Z] Copying: 233/1024 [MB] (23 MBps) [2024-12-07T10:46:35.706Z] Copying: 256/1024 [MB] (23 MBps) [2024-12-07T10:46:36.639Z] Copying: 280/1024 [MB] (23 MBps) [2024-12-07T10:46:37.578Z] Copying: 303/1024 [MB] (23 MBps) [2024-12-07T10:46:38.516Z] Copying: 326/1024 [MB] (23 MBps) [2024-12-07T10:46:39.454Z] Copying: 350/1024 [MB] (23 MBps) [2024-12-07T10:46:40.394Z] Copying: 373/1024 [MB] (23 MBps) [2024-12-07T10:46:41.333Z] Copying: 396/1024 [MB] (22 MBps) [2024-12-07T10:46:42.713Z] Copying: 419/1024 [MB] (23 MBps) [2024-12-07T10:46:43.651Z] Copying: 442/1024 [MB] (23 MBps) [2024-12-07T10:46:44.589Z] Copying: 465/1024 [MB] (22 MBps) [2024-12-07T10:46:45.525Z] Copying: 488/1024 [MB] (23 MBps) [2024-12-07T10:46:46.459Z] Copying: 512/1024 [MB] (24 MBps) [2024-12-07T10:46:47.394Z] Copying: 536/1024 [MB] (23 MBps) [2024-12-07T10:46:48.326Z] Copying: 559/1024 [MB] (23 MBps) [2024-12-07T10:46:49.699Z] Copying: 583/1024 [MB] (23 MBps) [2024-12-07T10:46:50.633Z] Copying: 607/1024 [MB] (23 MBps) [2024-12-07T10:46:51.567Z] Copying: 631/1024 [MB] (23 MBps) [2024-12-07T10:46:52.500Z] Copying: 654/1024 [MB] (23 MBps) [2024-12-07T10:46:53.459Z] Copying: 678/1024 [MB] (24 MBps) [2024-12-07T10:46:54.406Z] Copying: 701/1024 [MB] (22 MBps) [2024-12-07T10:46:55.344Z] Copying: 725/1024 [MB] (23 MBps) [2024-12-07T10:46:56.718Z] Copying: 749/1024 [MB] (24 MBps) [2024-12-07T10:46:57.653Z] Copying: 774/1024 [MB] (24 MBps) [2024-12-07T10:46:58.589Z] Copying: 798/1024 [MB] (24 MBps) [2024-12-07T10:46:59.523Z] Copying: 822/1024 [MB] (24 MBps) [2024-12-07T10:47:00.459Z] Copying: 846/1024 [MB] (24 MBps) [2024-12-07T10:47:01.395Z] Copying: 871/1024 [MB] (24 MBps) [2024-12-07T10:47:02.331Z] Copying: 896/1024 [MB] (24 MBps) [2024-12-07T10:47:03.710Z] Copying: 921/1024 [MB] (24 MBps) [2024-12-07T10:47:04.647Z] Copying: 944/1024 [MB] (23 MBps) [2024-12-07T10:47:05.584Z] Copying: 968/1024 [MB] (23 MBps) [2024-12-07T10:47:06.518Z] Copying: 991/1024 [MB] (23 MBps) [2024-12-07T10:47:07.455Z] Copying: 1014/1024 [MB] (22 MBps) [2024-12-07T10:47:07.715Z] Copying: 1048284/1048576 [kB] (9076 kBps) [2024-12-07T10:47:07.715Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-07 10:47:07.560709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.362 [2024-12-07 10:47:07.560776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:08.362 [2024-12-07 10:47:07.560794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:08.362 [2024-12-07 10:47:07.560806] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.362 [2024-12-07 10:47:07.561888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:08.362 [2024-12-07 10:47:07.567978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.362 [2024-12-07 10:47:07.568030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:08.362 [2024-12-07 10:47:07.568045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.068 ms 00:35:08.362 [2024-12-07 10:47:07.568055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.362 [2024-12-07 10:47:07.575791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.362 [2024-12-07 10:47:07.575833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:08.362 [2024-12-07 10:47:07.575846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.724 ms 00:35:08.362 [2024-12-07 10:47:07.575857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.362 [2024-12-07 10:47:07.575886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.362 [2024-12-07 10:47:07.575898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:08.362 [2024-12-07 10:47:07.575908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:08.362 [2024-12-07 10:47:07.575918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.362 [2024-12-07 10:47:07.575992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.362 [2024-12-07 10:47:07.576008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:08.362 [2024-12-07 10:47:07.576019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:35:08.363 [2024-12-07 10:47:07.576029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.363 [2024-12-07 10:47:07.576052] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:08.363 [2024-12-07 10:47:07.576066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128512 / 261120 wr_cnt: 1 state: open 00:35:08.363 [2024-12-07 10:47:07.576079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 
00:35:08.363 [2024-12-07 10:47:07.576191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 
wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:08.363 [2024-12-07 10:47:07.576816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.576981] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:08.364 [2024-12-07 10:47:07.577166] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:08.364 [2024-12-07 10:47:07.577176] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1550b398-958a-49a9-bb53-5ab7cdf56510 00:35:08.364 [2024-12-07 10:47:07.577187] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128512 00:35:08.364 [2024-12-07 10:47:07.577197] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128544 00:35:08.364 [2024-12-07 10:47:07.577206] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128512 00:35:08.364 [2024-12-07 10:47:07.577216] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:35:08.364 [2024-12-07 10:47:07.577230] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:08.364 [2024-12-07 10:47:07.577241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:08.364 [2024-12-07 10:47:07.577251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:08.364 [2024-12-07 10:47:07.577259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:08.364 [2024-12-07 10:47:07.577268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:08.364 [2024-12-07 10:47:07.577278] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.364 [2024-12-07 10:47:07.577288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:08.364 [2024-12-07 10:47:07.577300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:35:08.364 [2024-12-07 10:47:07.577309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.364 [2024-12-07 10:47:07.596850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.364 [2024-12-07 10:47:07.596888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:08.364 [2024-12-07 10:47:07.596908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.553 ms 00:35:08.364 [2024-12-07 10:47:07.596919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.364 [2024-12-07 10:47:07.597498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.364 [2024-12-07 10:47:07.597518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:08.364 [2024-12-07 10:47:07.597530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:35:08.364 [2024-12-07 10:47:07.597540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.364 [2024-12-07 10:47:07.648876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.364 [2024-12-07 10:47:07.648915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:08.364 [2024-12-07 10:47:07.648928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.364 [2024-12-07 10:47:07.648938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.364 [2024-12-07 10:47:07.649003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.364 [2024-12-07 10:47:07.649016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:08.364 [2024-12-07 10:47:07.649027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.364 [2024-12-07 10:47:07.649037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.364 [2024-12-07 10:47:07.649111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.364 [2024-12-07 10:47:07.649125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:08.364 [2024-12-07 10:47:07.649141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.364 [2024-12-07 10:47:07.649152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.364 [2024-12-07 10:47:07.649169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.364 [2024-12-07 10:47:07.649181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:08.364 [2024-12-07 10:47:07.649191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.364 [2024-12-07 10:47:07.649201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.624 [2024-12-07 10:47:07.768520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.624 [2024-12-07 10:47:07.768597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:08.624 [2024-12-07 10:47:07.768613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.624 [2024-12-07 10:47:07.768624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:35:08.624 [2024-12-07 10:47:07.863722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.624 [2024-12-07 10:47:07.863777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:08.624 [2024-12-07 10:47:07.863791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.624 [2024-12-07 10:47:07.863801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.624 [2024-12-07 10:47:07.863895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.624 [2024-12-07 10:47:07.863907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:08.624 [2024-12-07 10:47:07.863918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.624 [2024-12-07 10:47:07.863931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.624 [2024-12-07 10:47:07.863968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.624 [2024-12-07 10:47:07.863994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:08.624 [2024-12-07 10:47:07.864005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.624 [2024-12-07 10:47:07.864031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.624 [2024-12-07 10:47:07.864109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.624 [2024-12-07 10:47:07.864123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:08.624 [2024-12-07 10:47:07.864134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.624 [2024-12-07 10:47:07.864144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.624 [2024-12-07 10:47:07.864180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.624 [2024-12-07 10:47:07.864193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:08.624 [2024-12-07 10:47:07.864203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.624 [2024-12-07 10:47:07.864213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.624 [2024-12-07 10:47:07.864251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.624 [2024-12-07 10:47:07.864262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:08.624 [2024-12-07 10:47:07.864271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.624 [2024-12-07 10:47:07.864281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.624 [2024-12-07 10:47:07.864329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.624 [2024-12-07 10:47:07.864341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:08.624 [2024-12-07 10:47:07.864351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.624 [2024-12-07 10:47:07.864360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.624 [2024-12-07 10:47:07.864501] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 306.664 ms, result 0 00:35:10.002 00:35:10.002 00:35:10.002 10:47:09 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:35:10.002 [2024-12-07 10:47:09.305239] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:35:10.002 [2024-12-07 10:47:09.305368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86310 ] 00:35:10.261 [2024-12-07 10:47:09.483303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.261 [2024-12-07 10:47:09.591605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.831 [2024-12-07 10:47:09.949876] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:10.831 [2024-12-07 10:47:09.950185] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:10.831 [2024-12-07 10:47:10.110911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.831 [2024-12-07 10:47:10.110969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:10.831 [2024-12-07 10:47:10.111016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:10.831 [2024-12-07 10:47:10.111027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.831 [2024-12-07 10:47:10.111076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.831 [2024-12-07 10:47:10.111091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:10.831 [2024-12-07 10:47:10.111119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:35:10.831 [2024-12-07 10:47:10.111129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.831 [2024-12-07 10:47:10.111151] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:10.831 [2024-12-07 10:47:10.112214] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:10.831 [2024-12-07 10:47:10.112241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.831 [2024-12-07 10:47:10.112252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:10.831 [2024-12-07 10:47:10.112263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:35:10.831 [2024-12-07 10:47:10.112272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.831 [2024-12-07 10:47:10.112613] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:35:10.831 [2024-12-07 10:47:10.112634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.831 [2024-12-07 10:47:10.112649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:10.831 [2024-12-07 10:47:10.112660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:35:10.831 [2024-12-07 10:47:10.112670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.831 [2024-12-07 10:47:10.112731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.831 [2024-12-07 10:47:10.112742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:10.831 [2024-12-07 10:47:10.112753] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:35:10.831 [2024-12-07 10:47:10.112763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.831 [2024-12-07 10:47:10.113206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.831 [2024-12-07 10:47:10.113220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:10.831 [2024-12-07 10:47:10.113231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:35:10.831 [2024-12-07 10:47:10.113241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.831 [2024-12-07 10:47:10.113312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.831 [2024-12-07 10:47:10.113325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:10.831 [2024-12-07 10:47:10.113335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:35:10.831 [2024-12-07 10:47:10.113345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.831 [2024-12-07 10:47:10.113368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.831 [2024-12-07 10:47:10.113379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:10.831 [2024-12-07 10:47:10.113393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:10.831 [2024-12-07 10:47:10.113403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.831 [2024-12-07 10:47:10.113424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:10.831 [2024-12-07 10:47:10.118682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.831 [2024-12-07 10:47:10.118713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:10.831 [2024-12-07 10:47:10.118724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.271 ms 00:35:10.831 [2024-12-07 10:47:10.118734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.831 [2024-12-07 10:47:10.118785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.831 [2024-12-07 10:47:10.118796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:10.831 [2024-12-07 10:47:10.118806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:35:10.831 [2024-12-07 10:47:10.118816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.831 [2024-12-07 10:47:10.118871] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:10.831 [2024-12-07 10:47:10.118896] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:10.831 [2024-12-07 10:47:10.118933] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:10.831 [2024-12-07 10:47:10.118950] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:10.831 [2024-12-07 10:47:10.119053] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:10.831 [2024-12-07 10:47:10.119068] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:10.831 [2024-12-07 10:47:10.119080] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:10.831 [2024-12-07 10:47:10.119094] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:10.831 [2024-12-07 10:47:10.119106] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:10.831 [2024-12-07 10:47:10.119121] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:10.831 [2024-12-07 10:47:10.119131] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:10.831 [2024-12-07 10:47:10.119141] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:10.831 [2024-12-07 10:47:10.119151] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:10.831 [2024-12-07 10:47:10.119161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.831 [2024-12-07 10:47:10.119171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:10.831 [2024-12-07 10:47:10.119181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:35:10.831 [2024-12-07 10:47:10.119191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.831 [2024-12-07 10:47:10.119265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.831 [2024-12-07 10:47:10.119275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:10.831 [2024-12-07 10:47:10.119285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:35:10.831 [2024-12-07 10:47:10.119299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.831 [2024-12-07 10:47:10.119391] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:10.831 [2024-12-07 10:47:10.119406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:10.831 [2024-12-07 10:47:10.119418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:10.831 [2024-12-07 10:47:10.119428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:10.831 [2024-12-07 10:47:10.119447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:10.831 [2024-12-07 10:47:10.119457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:10.831 [2024-12-07 10:47:10.119467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:10.831 [2024-12-07 10:47:10.119477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:10.831 [2024-12-07 10:47:10.119487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:10.831 [2024-12-07 10:47:10.119495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:10.831 [2024-12-07 10:47:10.119506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:10.831 [2024-12-07 10:47:10.119516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:10.831 [2024-12-07 10:47:10.119525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:10.831 [2024-12-07 10:47:10.119534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:10.831 [2024-12-07 10:47:10.119544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:10.831 [2024-12-07 10:47:10.119561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:10.831 
[2024-12-07 10:47:10.119571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:10.831 [2024-12-07 10:47:10.119580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:10.831 [2024-12-07 10:47:10.119589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:10.831 [2024-12-07 10:47:10.119598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:10.831 [2024-12-07 10:47:10.119607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:10.831 [2024-12-07 10:47:10.119616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:10.831 [2024-12-07 10:47:10.119625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:10.831 [2024-12-07 10:47:10.119635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:10.831 [2024-12-07 10:47:10.119644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:10.831 [2024-12-07 10:47:10.119653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:10.831 [2024-12-07 10:47:10.119662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:10.831 [2024-12-07 10:47:10.119672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:10.831 [2024-12-07 10:47:10.119681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:10.831 [2024-12-07 10:47:10.119690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:10.831 [2024-12-07 10:47:10.119699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:10.831 [2024-12-07 10:47:10.119719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:10.831 [2024-12-07 10:47:10.119728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:10.831 [2024-12-07 10:47:10.119737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:10.831 [2024-12-07 10:47:10.119746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:10.831 [2024-12-07 10:47:10.119755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:10.831 [2024-12-07 10:47:10.119764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:10.831 [2024-12-07 10:47:10.119773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:10.831 [2024-12-07 10:47:10.119782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:10.831 [2024-12-07 10:47:10.119791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:10.831 [2024-12-07 10:47:10.119799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:10.831 [2024-12-07 10:47:10.119808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:10.831 [2024-12-07 10:47:10.119819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:10.831 [2024-12-07 10:47:10.119828] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:10.831 [2024-12-07 10:47:10.119837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:10.831 [2024-12-07 10:47:10.119847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:10.831 [2024-12-07 10:47:10.119856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:10.831 [2024-12-07 10:47:10.119869] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:35:10.831 [2024-12-07 10:47:10.119878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:10.831 [2024-12-07 10:47:10.119888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:10.832 [2024-12-07 10:47:10.119897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:10.832 [2024-12-07 10:47:10.119906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:10.832 [2024-12-07 10:47:10.119915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:10.832 [2024-12-07 10:47:10.119925] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:10.832 [2024-12-07 10:47:10.119937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:10.832 [2024-12-07 10:47:10.119948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:10.832 [2024-12-07 10:47:10.119958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:10.832 [2024-12-07 10:47:10.119968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:10.832 [2024-12-07 10:47:10.119977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:10.832 [2024-12-07 10:47:10.119998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:10.832 [2024-12-07 10:47:10.120009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:10.832 [2024-12-07 10:47:10.120019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:10.832 [2024-12-07 10:47:10.120029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:10.832 [2024-12-07 10:47:10.120039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:10.832 [2024-12-07 10:47:10.120049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:10.832 [2024-12-07 10:47:10.120059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:10.832 [2024-12-07 10:47:10.120069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:10.832 [2024-12-07 10:47:10.120079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:10.832 [2024-12-07 10:47:10.120090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:10.832 [2024-12-07 10:47:10.120100] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:10.832 [2024-12-07 
10:47:10.120110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:10.832 [2024-12-07 10:47:10.120129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:10.832 [2024-12-07 10:47:10.120139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:10.832 [2024-12-07 10:47:10.120149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:10.832 [2024-12-07 10:47:10.120160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:10.832 [2024-12-07 10:47:10.120171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.832 [2024-12-07 10:47:10.120181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:10.832 [2024-12-07 10:47:10.120191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:35:10.832 [2024-12-07 10:47:10.120201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.832 [2024-12-07 10:47:10.156111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.832 [2024-12-07 10:47:10.156258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:10.832 [2024-12-07 10:47:10.156298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.926 ms 00:35:10.832 [2024-12-07 10:47:10.156309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.832 [2024-12-07 10:47:10.156387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.832 [2024-12-07 10:47:10.156398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:10.832 [2024-12-07 10:47:10.156414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:35:10.832 [2024-12-07 10:47:10.156424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.090 [2024-12-07 10:47:10.221387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.090 [2024-12-07 10:47:10.221525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:11.090 [2024-12-07 10:47:10.221564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.009 ms 00:35:11.090 [2024-12-07 10:47:10.221576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.090 [2024-12-07 10:47:10.221621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.090 [2024-12-07 10:47:10.221632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:11.090 [2024-12-07 10:47:10.221643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:11.090 [2024-12-07 10:47:10.221653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.090 [2024-12-07 10:47:10.221779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.090 [2024-12-07 10:47:10.221792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:11.090 [2024-12-07 10:47:10.221802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:35:11.090 [2024-12-07 10:47:10.221812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:35:11.090 [2024-12-07 10:47:10.221930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.090 [2024-12-07 10:47:10.221943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:11.090 [2024-12-07 10:47:10.221953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:35:11.090 [2024-12-07 10:47:10.221963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.241451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.241485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:11.091 [2024-12-07 10:47:10.241500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.501 ms 00:35:11.091 [2024-12-07 10:47:10.241527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.241661] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:11.091 [2024-12-07 10:47:10.241676] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:11.091 [2024-12-07 10:47:10.241692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.241703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:11.091 [2024-12-07 10:47:10.241715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:35:11.091 [2024-12-07 10:47:10.241724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.253006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.253041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:11.091 [2024-12-07 10:47:10.253053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.282 ms 00:35:11.091 [2024-12-07 10:47:10.253064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.253179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.253191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:11.091 [2024-12-07 10:47:10.253202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:35:11.091 [2024-12-07 10:47:10.253217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.253269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.253282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:11.091 [2024-12-07 10:47:10.253292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:35:11.091 [2024-12-07 10:47:10.253312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.253959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.253999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:11.091 [2024-12-07 10:47:10.254011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:35:11.091 [2024-12-07 10:47:10.254021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 
10:47:10.254047] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:35:11.091 [2024-12-07 10:47:10.254061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.254071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:11.091 [2024-12-07 10:47:10.254082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:35:11.091 [2024-12-07 10:47:10.254091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.266107] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:11.091 [2024-12-07 10:47:10.266314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.266328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:11.091 [2024-12-07 10:47:10.266340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.222 ms 00:35:11.091 [2024-12-07 10:47:10.266350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.268248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.268385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:11.091 [2024-12-07 10:47:10.268405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.878 ms 00:35:11.091 [2024-12-07 10:47:10.268416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.268499] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:35:11.091 [2024-12-07 10:47:10.268893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.268905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:11.091 [2024-12-07 10:47:10.268916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:35:11.091 [2024-12-07 10:47:10.268926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.268956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.268967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:11.091 [2024-12-07 10:47:10.268997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:11.091 [2024-12-07 10:47:10.269007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.269042] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:11.091 [2024-12-07 10:47:10.269055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.269064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:11.091 [2024-12-07 10:47:10.269075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:35:11.091 [2024-12-07 10:47:10.269084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.304429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.304465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:11.091 [2024-12-07 10:47:10.304478] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 35.382 ms 00:35:11.091 [2024-12-07 10:47:10.304488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.304553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.091 [2024-12-07 10:47:10.304564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:11.091 [2024-12-07 10:47:10.304575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:35:11.091 [2024-12-07 10:47:10.304584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.091 [2024-12-07 10:47:10.305702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 194.665 ms, result 0 00:35:12.466  [2024-12-07T10:47:12.755Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-07T10:47:13.695Z] Copying: 50/1024 [MB] (25 MBps) [2024-12-07T10:47:14.633Z] Copying: 75/1024 [MB] (25 MBps) [2024-12-07T10:47:15.570Z] Copying: 101/1024 [MB] (25 MBps) [2024-12-07T10:47:16.948Z] Copying: 126/1024 [MB] (25 MBps) [2024-12-07T10:47:17.518Z] Copying: 152/1024 [MB] (25 MBps) [2024-12-07T10:47:18.897Z] Copying: 177/1024 [MB] (25 MBps) [2024-12-07T10:47:19.831Z] Copying: 203/1024 [MB] (25 MBps) [2024-12-07T10:47:20.765Z] Copying: 228/1024 [MB] (24 MBps) [2024-12-07T10:47:21.763Z] Copying: 252/1024 [MB] (24 MBps) [2024-12-07T10:47:22.711Z] Copying: 277/1024 [MB] (24 MBps) [2024-12-07T10:47:23.649Z] Copying: 302/1024 [MB] (25 MBps) [2024-12-07T10:47:24.585Z] Copying: 327/1024 [MB] (24 MBps) [2024-12-07T10:47:25.522Z] Copying: 351/1024 [MB] (24 MBps) [2024-12-07T10:47:26.900Z] Copying: 375/1024 [MB] (24 MBps) [2024-12-07T10:47:27.842Z] Copying: 400/1024 [MB] (24 MBps) [2024-12-07T10:47:28.781Z] Copying: 426/1024 [MB] (25 MBps) [2024-12-07T10:47:29.719Z] Copying: 451/1024 [MB] (25 MBps) [2024-12-07T10:47:30.657Z] Copying: 477/1024 [MB] (25 MBps) [2024-12-07T10:47:31.595Z] Copying: 502/1024 [MB] (25 MBps) [2024-12-07T10:47:32.535Z] Copying: 528/1024 [MB] (25 MBps) [2024-12-07T10:47:33.916Z] Copying: 552/1024 [MB] (24 MBps) [2024-12-07T10:47:34.484Z] Copying: 577/1024 [MB] (25 MBps) [2024-12-07T10:47:35.860Z] Copying: 603/1024 [MB] (25 MBps) [2024-12-07T10:47:36.794Z] Copying: 628/1024 [MB] (25 MBps) [2024-12-07T10:47:37.729Z] Copying: 653/1024 [MB] (25 MBps) [2024-12-07T10:47:38.663Z] Copying: 678/1024 [MB] (24 MBps) [2024-12-07T10:47:39.596Z] Copying: 703/1024 [MB] (24 MBps) [2024-12-07T10:47:40.529Z] Copying: 728/1024 [MB] (25 MBps) [2024-12-07T10:47:41.901Z] Copying: 754/1024 [MB] (26 MBps) [2024-12-07T10:47:42.833Z] Copying: 780/1024 [MB] (26 MBps) [2024-12-07T10:47:43.769Z] Copying: 806/1024 [MB] (25 MBps) [2024-12-07T10:47:44.703Z] Copying: 831/1024 [MB] (24 MBps) [2024-12-07T10:47:45.640Z] Copying: 857/1024 [MB] (26 MBps) [2024-12-07T10:47:46.573Z] Copying: 883/1024 [MB] (26 MBps) [2024-12-07T10:47:47.511Z] Copying: 910/1024 [MB] (26 MBps) [2024-12-07T10:47:48.892Z] Copying: 935/1024 [MB] (25 MBps) [2024-12-07T10:47:49.461Z] Copying: 961/1024 [MB] (25 MBps) [2024-12-07T10:47:50.524Z] Copying: 986/1024 [MB] (25 MBps) [2024-12-07T10:47:51.094Z] Copying: 1012/1024 [MB] (25 MBps) [2024-12-07T10:47:51.094Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-07 10:47:51.060005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.741 [2024-12-07 10:47:51.060072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:51.741 [2024-12-07 10:47:51.060095] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:35:51.741 [2024-12-07 10:47:51.060208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.741 [2024-12-07 10:47:51.060241] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:51.741 [2024-12-07 10:47:51.067332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.741 [2024-12-07 10:47:51.067385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:51.741 [2024-12-07 10:47:51.067408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.078 ms 00:35:51.741 [2024-12-07 10:47:51.067436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.741 [2024-12-07 10:47:51.067781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.741 [2024-12-07 10:47:51.067804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:51.741 [2024-12-07 10:47:51.067823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:35:51.741 [2024-12-07 10:47:51.067842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.741 [2024-12-07 10:47:51.067902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.741 [2024-12-07 10:47:51.067923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:51.741 [2024-12-07 10:47:51.067942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:51.741 [2024-12-07 10:47:51.067961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.741 [2024-12-07 10:47:51.068064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.741 [2024-12-07 10:47:51.068092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:51.741 [2024-12-07 10:47:51.068111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:35:51.741 [2024-12-07 10:47:51.068129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.741 [2024-12-07 10:47:51.068157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:51.741 [2024-12-07 10:47:51.068189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:35:51.741 [2024-12-07 10:47:51.068213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068371] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:51.741 [2024-12-07 10:47:51.068509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 
[2024-12-07 10:47:51.068863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.068994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:35:51.742 [2024-12-07 10:47:51.069398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:51.742 [2024-12-07 10:47:51.069911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.069931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.069950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.069970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.070004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.070024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.070044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.070064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.070083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.070104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.070124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.070143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.070163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.070182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.070202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:51.743 [2024-12-07 10:47:51.070234] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:51.743 [2024-12-07 10:47:51.070253] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1550b398-958a-49a9-bb53-5ab7cdf56510 00:35:51.743 [2024-12-07 10:47:51.070273] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:35:51.743 [2024-12-07 10:47:51.070291] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2592 00:35:51.743 [2024-12-07 10:47:51.070310] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2560 00:35:51.743 [2024-12-07 10:47:51.070335] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:35:51.743 [2024-12-07 10:47:51.070353] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:51.743 [2024-12-07 10:47:51.070371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:51.743 [2024-12-07 10:47:51.070390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:51.743 [2024-12-07 10:47:51.070406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:51.743 [2024-12-07 10:47:51.070423] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:51.743 [2024-12-07 10:47:51.070441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.743 [2024-12-07 10:47:51.070460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:51.743 [2024-12-07 10:47:51.070479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.289 ms 00:35:51.743 [2024-12-07 10:47:51.070496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.003 [2024-12-07 10:47:51.092891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.003 [2024-12-07 10:47:51.093064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:52.003 [2024-12-07 10:47:51.093153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.399 ms 00:35:52.003 [2024-12-07 10:47:51.093195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.003 [2024-12-07 10:47:51.093732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.003 [2024-12-07 10:47:51.093832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:52.003 [2024-12-07 10:47:51.093902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:35:52.003 [2024-12-07 10:47:51.093938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.003 [2024-12-07 10:47:51.143383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.003 [2024-12-07 10:47:51.143524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:52.003 [2024-12-07 10:47:51.143608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.003 [2024-12-07 10:47:51.143643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.003 [2024-12-07 10:47:51.143718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.003 [2024-12-07 10:47:51.143750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:52.003 [2024-12-07 10:47:51.143823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.003 [2024-12-07 10:47:51.143857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.003 [2024-12-07 10:47:51.143936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.003 [2024-12-07 10:47:51.143988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:52.003 [2024-12-07 10:47:51.144022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.003 [2024-12-07 10:47:51.144103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.003 [2024-12-07 10:47:51.144149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.003 [2024-12-07 10:47:51.144180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:52.003 [2024-12-07 10:47:51.144210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.003 [2024-12-07 10:47:51.144239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.003 [2024-12-07 10:47:51.260895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.003 [2024-12-07 10:47:51.261114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:52.003 [2024-12-07 10:47:51.261367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:35:52.003 [2024-12-07 10:47:51.261405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.263 [2024-12-07 10:47:51.356863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.263 [2024-12-07 10:47:51.357055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:52.263 [2024-12-07 10:47:51.357146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.263 [2024-12-07 10:47:51.357183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.263 [2024-12-07 10:47:51.357300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.263 [2024-12-07 10:47:51.357340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:52.263 [2024-12-07 10:47:51.357378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.263 [2024-12-07 10:47:51.357466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.263 [2024-12-07 10:47:51.357541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.263 [2024-12-07 10:47:51.357575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:52.263 [2024-12-07 10:47:51.357606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.263 [2024-12-07 10:47:51.357635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.263 [2024-12-07 10:47:51.357761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.263 [2024-12-07 10:47:51.357798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:52.263 [2024-12-07 10:47:51.357835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.263 [2024-12-07 10:47:51.357873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.263 [2024-12-07 10:47:51.357926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.263 [2024-12-07 10:47:51.357963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:52.263 [2024-12-07 10:47:51.358075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.263 [2024-12-07 10:47:51.358119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.263 [2024-12-07 10:47:51.358185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.263 [2024-12-07 10:47:51.358218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:52.263 [2024-12-07 10:47:51.358252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.263 [2024-12-07 10:47:51.358350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.263 [2024-12-07 10:47:51.358416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.263 [2024-12-07 10:47:51.358449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:52.263 [2024-12-07 10:47:51.358479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.263 [2024-12-07 10:47:51.358563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.263 [2024-12-07 10:47:51.358729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 299.203 ms, result 0 00:35:53.201 00:35:53.201 00:35:53.201 10:47:52 ftl.ftl_restore_fast -- 
ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:55.104 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:55.104 10:47:54 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:35:55.104 10:47:54 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:55.105 Process with pid 84708 is not found 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 84708 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84708 ']' 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84708 00:35:55.105 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84708) - No such process 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 84708 is not found' 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:55.105 Remove shared memory files 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_1550b398-958a-49a9-bb53-5ab7cdf56510_band_md /dev/hugepages/ftl_1550b398-958a-49a9-bb53-5ab7cdf56510_l2p_l1 /dev/hugepages/ftl_1550b398-958a-49a9-bb53-5ab7cdf56510_l2p_l2 /dev/hugepages/ftl_1550b398-958a-49a9-bb53-5ab7cdf56510_l2p_l2_ctx /dev/hugepages/ftl_1550b398-958a-49a9-bb53-5ab7cdf56510_nvc_md /dev/hugepages/ftl_1550b398-958a-49a9-bb53-5ab7cdf56510_p2l_pool /dev/hugepages/ftl_1550b398-958a-49a9-bb53-5ab7cdf56510_sb /dev/hugepages/ftl_1550b398-958a-49a9-bb53-5ab7cdf56510_sb_shm /dev/hugepages/ftl_1550b398-958a-49a9-bb53-5ab7cdf56510_trim_bitmap /dev/hugepages/ftl_1550b398-958a-49a9-bb53-5ab7cdf56510_trim_log /dev/hugepages/ftl_1550b398-958a-49a9-bb53-5ab7cdf56510_trim_md /dev/hugepages/ftl_1550b398-958a-49a9-bb53-5ab7cdf56510_vmap 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:35:55.105 ************************************ 00:35:55.105 END TEST ftl_restore_fast 00:35:55.105 ************************************ 00:35:55.105 00:35:55.105 real 3m24.339s 00:35:55.105 user 3m11.385s 00:35:55.105 sys 0m14.260s 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:55.105 10:47:54 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:35:55.105 10:47:54 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:35:55.105 10:47:54 ftl -- ftl/ftl.sh@14 -- # killprocess 76725 00:35:55.105 10:47:54 ftl -- common/autotest_common.sh@954 -- # '[' -z 76725 ']' 00:35:55.105 10:47:54 ftl -- common/autotest_common.sh@958 -- # kill -0 76725 00:35:55.105 Process with pid 76725 is not found 00:35:55.105 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76725) - No such process 
00:35:55.105 10:47:54 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76725 is not found' 00:35:55.105 10:47:54 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:35:55.105 10:47:54 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86770 00:35:55.105 10:47:54 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:55.105 10:47:54 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86770 00:35:55.105 10:47:54 ftl -- common/autotest_common.sh@835 -- # '[' -z 86770 ']' 00:35:55.105 10:47:54 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:55.105 10:47:54 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:55.105 10:47:54 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:55.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:55.105 10:47:54 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:55.105 10:47:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:55.105 [2024-12-07 10:47:54.408390] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:35:55.105 [2024-12-07 10:47:54.408510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86770 ] 00:35:55.364 [2024-12-07 10:47:54.590549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.365 [2024-12-07 10:47:54.711333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.304 10:47:55 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:56.304 10:47:55 ftl -- common/autotest_common.sh@868 -- # return 0 00:35:56.304 10:47:55 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:56.562 nvme0n1 00:35:56.562 10:47:55 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:35:56.562 10:47:55 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:56.562 10:47:55 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:56.821 10:47:56 ftl -- ftl/common.sh@28 -- # stores=a7cd6b1d-cfaf-4779-a56d-25e8c311b7a2 00:35:56.821 10:47:56 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:35:56.821 10:47:56 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a7cd6b1d-cfaf-4779-a56d-25e8c311b7a2 00:35:57.079 10:47:56 ftl -- ftl/ftl.sh@23 -- # killprocess 86770 00:35:57.079 10:47:56 ftl -- common/autotest_common.sh@954 -- # '[' -z 86770 ']' 00:35:57.079 10:47:56 ftl -- common/autotest_common.sh@958 -- # kill -0 86770 00:35:57.079 10:47:56 ftl -- common/autotest_common.sh@959 -- # uname 00:35:57.079 10:47:56 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:57.079 10:47:56 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86770 00:35:57.079 killing process with pid 86770 00:35:57.079 10:47:56 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:57.079 10:47:56 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:57.079 10:47:56 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86770' 00:35:57.079 10:47:56 ftl -- common/autotest_common.sh@973 -- # kill 86770 00:35:57.079 10:47:56 ftl -- common/autotest_common.sh@978 -- 
# wait 86770 00:35:59.608 10:47:58 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:59.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:59.608 Waiting for block devices as requested 00:35:59.868 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:59.868 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:00.128 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:00.128 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:05.406 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:05.406 10:48:04 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:05.406 Remove shared memory files 00:36:05.406 10:48:04 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:05.406 10:48:04 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:05.406 10:48:04 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:05.406 10:48:04 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:05.406 10:48:04 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:05.406 10:48:04 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:05.406 ************************************ 00:36:05.406 END TEST ftl 00:36:05.406 ************************************ 00:36:05.406 00:36:05.406 real 15m8.583s 00:36:05.406 user 17m36.423s 00:36:05.406 sys 1m50.317s 00:36:05.406 10:48:04 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:05.406 10:48:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:05.406 10:48:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:05.406 10:48:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:05.406 10:48:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:05.406 10:48:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:05.406 10:48:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:05.406 10:48:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:05.406 10:48:04 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:05.406 10:48:04 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:05.406 10:48:04 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:05.406 10:48:04 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:05.406 10:48:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:05.406 10:48:04 -- common/autotest_common.sh@10 -- # set +x 00:36:05.406 10:48:04 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:05.406 10:48:04 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:05.406 10:48:04 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:05.406 10:48:04 -- common/autotest_common.sh@10 -- # set +x 00:36:07.943 INFO: APP EXITING 00:36:07.943 INFO: killing all VMs 00:36:07.943 INFO: killing vhost app 00:36:07.943 INFO: EXIT DONE 00:36:08.203 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:08.772 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:08.772 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:08.772 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:08.772 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:09.341 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:09.602 Cleaning 00:36:09.602 Removing: /var/run/dpdk/spdk0/config 00:36:09.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:09.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:09.602 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:09.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:09.602 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:09.602 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:09.602 Removing: /var/run/dpdk/spdk0 00:36:09.602 Removing: /var/run/dpdk/spdk_pid57499 00:36:09.602 Removing: /var/run/dpdk/spdk_pid57734 00:36:09.602 Removing: /var/run/dpdk/spdk_pid57969 00:36:09.602 Removing: /var/run/dpdk/spdk_pid58073 00:36:09.602 Removing: /var/run/dpdk/spdk_pid58122 00:36:09.602 Removing: /var/run/dpdk/spdk_pid58257 00:36:09.602 Removing: /var/run/dpdk/spdk_pid58275 00:36:09.602 Removing: /var/run/dpdk/spdk_pid58485 00:36:09.602 Removing: /var/run/dpdk/spdk_pid58601 00:36:09.602 Removing: /var/run/dpdk/spdk_pid58704 00:36:09.602 Removing: /var/run/dpdk/spdk_pid58831 00:36:09.602 Removing: /var/run/dpdk/spdk_pid58945 00:36:09.862 Removing: /var/run/dpdk/spdk_pid58979 00:36:09.862 Removing: /var/run/dpdk/spdk_pid59021 00:36:09.862 Removing: /var/run/dpdk/spdk_pid59097 00:36:09.862 Removing: /var/run/dpdk/spdk_pid59214 00:36:09.862 Removing: /var/run/dpdk/spdk_pid59661 00:36:09.862 Removing: /var/run/dpdk/spdk_pid59742 00:36:09.862 Removing: /var/run/dpdk/spdk_pid59829 00:36:09.862 Removing: /var/run/dpdk/spdk_pid59846 00:36:09.862 Removing: /var/run/dpdk/spdk_pid60007 00:36:09.862 Removing: /var/run/dpdk/spdk_pid60023 00:36:09.862 Removing: /var/run/dpdk/spdk_pid60171 00:36:09.862 Removing: /var/run/dpdk/spdk_pid60193 00:36:09.862 Removing: /var/run/dpdk/spdk_pid60263 00:36:09.862 Removing: /var/run/dpdk/spdk_pid60282 00:36:09.862 Removing: /var/run/dpdk/spdk_pid60352 00:36:09.862 Removing: /var/run/dpdk/spdk_pid60370 00:36:09.862 Removing: /var/run/dpdk/spdk_pid60565 00:36:09.862 Removing: /var/run/dpdk/spdk_pid60607 00:36:09.862 Removing: /var/run/dpdk/spdk_pid60696 00:36:09.862 Removing: /var/run/dpdk/spdk_pid60882 00:36:09.862 Removing: /var/run/dpdk/spdk_pid60977 00:36:09.862 Removing: /var/run/dpdk/spdk_pid61019 00:36:09.862 Removing: /var/run/dpdk/spdk_pid61469 00:36:09.862 Removing: /var/run/dpdk/spdk_pid61573 00:36:09.862 Removing: /var/run/dpdk/spdk_pid61693 00:36:09.862 Removing: /var/run/dpdk/spdk_pid61751 00:36:09.863 Removing: /var/run/dpdk/spdk_pid61777 00:36:09.863 Removing: /var/run/dpdk/spdk_pid61861 00:36:09.863 Removing: /var/run/dpdk/spdk_pid62508 00:36:09.863 Removing: /var/run/dpdk/spdk_pid62558 00:36:09.863 Removing: /var/run/dpdk/spdk_pid63051 00:36:09.863 Removing: /var/run/dpdk/spdk_pid63150 00:36:09.863 Removing: /var/run/dpdk/spdk_pid63280 00:36:09.863 Removing: /var/run/dpdk/spdk_pid63334 00:36:09.863 Removing: /var/run/dpdk/spdk_pid63360 00:36:09.863 Removing: /var/run/dpdk/spdk_pid63386 00:36:09.863 Removing: /var/run/dpdk/spdk_pid65285 00:36:09.863 Removing: /var/run/dpdk/spdk_pid65433 00:36:09.863 Removing: /var/run/dpdk/spdk_pid65442 00:36:09.863 Removing: /var/run/dpdk/spdk_pid65455 00:36:09.863 Removing: /var/run/dpdk/spdk_pid65500 00:36:09.863 Removing: /var/run/dpdk/spdk_pid65504 00:36:09.863 Removing: /var/run/dpdk/spdk_pid65516 00:36:09.863 Removing: /var/run/dpdk/spdk_pid65561 00:36:09.863 Removing: /var/run/dpdk/spdk_pid65570 00:36:09.863 Removing: /var/run/dpdk/spdk_pid65582 00:36:09.863 Removing: /var/run/dpdk/spdk_pid65627 00:36:09.863 Removing: /var/run/dpdk/spdk_pid65631 00:36:09.863 Removing: /var/run/dpdk/spdk_pid65643 00:36:09.863 Removing: /var/run/dpdk/spdk_pid67064 00:36:09.863 Removing: /var/run/dpdk/spdk_pid67185 00:36:09.863 Removing: /var/run/dpdk/spdk_pid68616 00:36:09.863 
Removing: /var/run/dpdk/spdk_pid70378 00:36:09.863 Removing: /var/run/dpdk/spdk_pid70462 00:36:09.863 Removing: /var/run/dpdk/spdk_pid70545 00:36:10.123 Removing: /var/run/dpdk/spdk_pid70655 00:36:10.123 Removing: /var/run/dpdk/spdk_pid70752 00:36:10.123 Removing: /var/run/dpdk/spdk_pid70854 00:36:10.123 Removing: /var/run/dpdk/spdk_pid70937 00:36:10.123 Removing: /var/run/dpdk/spdk_pid71018 00:36:10.123 Removing: /var/run/dpdk/spdk_pid71133 00:36:10.123 Removing: /var/run/dpdk/spdk_pid71225 00:36:10.123 Removing: /var/run/dpdk/spdk_pid71326 00:36:10.123 Removing: /var/run/dpdk/spdk_pid71406 00:36:10.123 Removing: /var/run/dpdk/spdk_pid71481 00:36:10.123 Removing: /var/run/dpdk/spdk_pid71591 00:36:10.123 Removing: /var/run/dpdk/spdk_pid71688 00:36:10.123 Removing: /var/run/dpdk/spdk_pid71784 00:36:10.123 Removing: /var/run/dpdk/spdk_pid71869 00:36:10.123 Removing: /var/run/dpdk/spdk_pid71945 00:36:10.123 Removing: /var/run/dpdk/spdk_pid72055 00:36:10.123 Removing: /var/run/dpdk/spdk_pid72149 00:36:10.123 Removing: /var/run/dpdk/spdk_pid72248 00:36:10.123 Removing: /var/run/dpdk/spdk_pid72329 00:36:10.123 Removing: /var/run/dpdk/spdk_pid72409 00:36:10.123 Removing: /var/run/dpdk/spdk_pid72494 00:36:10.123 Removing: /var/run/dpdk/spdk_pid72570 00:36:10.123 Removing: /var/run/dpdk/spdk_pid72681 00:36:10.123 Removing: /var/run/dpdk/spdk_pid72786 00:36:10.123 Removing: /var/run/dpdk/spdk_pid72881 00:36:10.123 Removing: /var/run/dpdk/spdk_pid72970 00:36:10.123 Removing: /var/run/dpdk/spdk_pid73052 00:36:10.123 Removing: /var/run/dpdk/spdk_pid73133 00:36:10.123 Removing: /var/run/dpdk/spdk_pid73208 00:36:10.123 Removing: /var/run/dpdk/spdk_pid73322 00:36:10.123 Removing: /var/run/dpdk/spdk_pid73419 00:36:10.123 Removing: /var/run/dpdk/spdk_pid73569 00:36:10.123 Removing: /var/run/dpdk/spdk_pid73875 00:36:10.123 Removing: /var/run/dpdk/spdk_pid73918 00:36:10.123 Removing: /var/run/dpdk/spdk_pid74370 00:36:10.123 Removing: /var/run/dpdk/spdk_pid74566 00:36:10.123 Removing: /var/run/dpdk/spdk_pid74673 00:36:10.123 Removing: /var/run/dpdk/spdk_pid74807 00:36:10.123 Removing: /var/run/dpdk/spdk_pid74865 00:36:10.123 Removing: /var/run/dpdk/spdk_pid74892 00:36:10.124 Removing: /var/run/dpdk/spdk_pid75191 00:36:10.124 Removing: /var/run/dpdk/spdk_pid75268 00:36:10.124 Removing: /var/run/dpdk/spdk_pid75348 00:36:10.124 Removing: /var/run/dpdk/spdk_pid75773 00:36:10.124 Removing: /var/run/dpdk/spdk_pid75919 00:36:10.124 Removing: /var/run/dpdk/spdk_pid76725 00:36:10.124 Removing: /var/run/dpdk/spdk_pid76874 00:36:10.124 Removing: /var/run/dpdk/spdk_pid77070 00:36:10.124 Removing: /var/run/dpdk/spdk_pid77179 00:36:10.124 Removing: /var/run/dpdk/spdk_pid77537 00:36:10.124 Removing: /var/run/dpdk/spdk_pid77837 00:36:10.124 Removing: /var/run/dpdk/spdk_pid78191 00:36:10.124 Removing: /var/run/dpdk/spdk_pid78396 00:36:10.384 Removing: /var/run/dpdk/spdk_pid78537 00:36:10.384 Removing: /var/run/dpdk/spdk_pid78605 00:36:10.384 Removing: /var/run/dpdk/spdk_pid78750 00:36:10.384 Removing: /var/run/dpdk/spdk_pid78781 00:36:10.384 Removing: /var/run/dpdk/spdk_pid78845 00:36:10.384 Removing: /var/run/dpdk/spdk_pid79054 00:36:10.384 Removing: /var/run/dpdk/spdk_pid79290 00:36:10.384 Removing: /var/run/dpdk/spdk_pid79747 00:36:10.384 Removing: /var/run/dpdk/spdk_pid80201 00:36:10.384 Removing: /var/run/dpdk/spdk_pid80686 00:36:10.384 Removing: /var/run/dpdk/spdk_pid81208 00:36:10.384 Removing: /var/run/dpdk/spdk_pid81367 00:36:10.384 Removing: /var/run/dpdk/spdk_pid81454 00:36:10.384 Removing: 
/var/run/dpdk/spdk_pid82156 00:36:10.384 Removing: /var/run/dpdk/spdk_pid82225 00:36:10.384 Removing: /var/run/dpdk/spdk_pid82709 00:36:10.384 Removing: /var/run/dpdk/spdk_pid83095 00:36:10.384 Removing: /var/run/dpdk/spdk_pid83618 00:36:10.384 Removing: /var/run/dpdk/spdk_pid83746 00:36:10.384 Removing: /var/run/dpdk/spdk_pid83806 00:36:10.384 Removing: /var/run/dpdk/spdk_pid83870 00:36:10.384 Removing: /var/run/dpdk/spdk_pid83926 00:36:10.384 Removing: /var/run/dpdk/spdk_pid83984 00:36:10.384 Removing: /var/run/dpdk/spdk_pid84187 00:36:10.384 Removing: /var/run/dpdk/spdk_pid84274 00:36:10.384 Removing: /var/run/dpdk/spdk_pid84340 00:36:10.384 Removing: /var/run/dpdk/spdk_pid84415 00:36:10.384 Removing: /var/run/dpdk/spdk_pid84469 00:36:10.384 Removing: /var/run/dpdk/spdk_pid84538 00:36:10.384 Removing: /var/run/dpdk/spdk_pid84708 00:36:10.384 Removing: /var/run/dpdk/spdk_pid84939 00:36:10.384 Removing: /var/run/dpdk/spdk_pid85388 00:36:10.384 Removing: /var/run/dpdk/spdk_pid85845 00:36:10.384 Removing: /var/run/dpdk/spdk_pid86310 00:36:10.384 Removing: /var/run/dpdk/spdk_pid86770 00:36:10.384 Clean 00:36:10.384 10:48:09 -- common/autotest_common.sh@1453 -- # return 0 00:36:10.384 10:48:09 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:10.384 10:48:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:10.384 10:48:09 -- common/autotest_common.sh@10 -- # set +x 00:36:10.644 10:48:09 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:10.644 10:48:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:10.644 10:48:09 -- common/autotest_common.sh@10 -- # set +x 00:36:10.644 10:48:09 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:10.644 10:48:09 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:10.644 10:48:09 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:10.644 10:48:09 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:10.644 10:48:09 -- spdk/autotest.sh@398 -- # hostname 00:36:10.644 10:48:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:10.904 geninfo: WARNING: invalid characters removed from testname! 
00:36:37.463 10:48:33 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:37.463 10:48:36 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:39.368 10:48:38 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:41.903 10:48:40 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:43.810 10:48:42 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:45.717 10:48:44 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:47.621 10:48:46 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:47.621 10:48:46 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:47.621 10:48:46 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:36:47.621 10:48:46 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:47.621 10:48:46 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:47.621 10:48:46 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:47.621 + [[ -n 5251 ]] 00:36:47.621 + sudo kill 5251 00:36:47.889 [Pipeline] } 00:36:47.904 [Pipeline] // timeout 00:36:47.910 [Pipeline] } 00:36:47.925 [Pipeline] // stage 00:36:47.930 [Pipeline] } 00:36:47.945 [Pipeline] // catchError 00:36:47.954 [Pipeline] stage 00:36:47.956 [Pipeline] { (Stop VM) 00:36:47.969 [Pipeline] sh 00:36:48.250 + vagrant halt 00:36:51.587 ==> default: Halting domain... 
00:36:58.244 [Pipeline] sh 00:36:58.526 + vagrant destroy -f 00:37:01.062 ==> default: Removing domain... 00:37:01.643 [Pipeline] sh 00:37:01.927 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:37:01.939 [Pipeline] } 00:37:01.949 [Pipeline] // stage 00:37:01.953 [Pipeline] } 00:37:01.966 [Pipeline] // dir 00:37:01.969 [Pipeline] } 00:37:01.979 [Pipeline] // wrap 00:37:01.984 [Pipeline] } 00:37:01.996 [Pipeline] // catchError 00:37:02.006 [Pipeline] stage 00:37:02.008 [Pipeline] { (Epilogue) 00:37:02.020 [Pipeline] sh 00:37:02.303 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:07.587 [Pipeline] catchError 00:37:07.588 [Pipeline] { 00:37:07.600 [Pipeline] sh 00:37:07.881 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:07.881 Artifacts sizes are good 00:37:07.890 [Pipeline] } 00:37:07.904 [Pipeline] // catchError 00:37:07.914 [Pipeline] archiveArtifacts 00:37:07.921 Archiving artifacts 00:37:08.027 [Pipeline] cleanWs 00:37:08.039 [WS-CLEANUP] Deleting project workspace... 00:37:08.039 [WS-CLEANUP] Deferred wipeout is used... 00:37:08.045 [WS-CLEANUP] done 00:37:08.047 [Pipeline] } 00:37:08.062 [Pipeline] // stage 00:37:08.067 [Pipeline] } 00:37:08.080 [Pipeline] // node 00:37:08.084 [Pipeline] End of Pipeline 00:37:08.178 Finished: SUCCESS